Monday, November 23, 2009

Brazilian Voting Machine Attacked Via Radio Monitoring

I'd like to make one point before diving into the details, and it's the reason I'm posting this story: attackers are very clever. If you are designing a critical system that will be exposed to large numbers of people or handle sensitive transactions, make sure you approach security correctly. Develop threat models, ensure secure design practices are used, train your developers to code securely, test your application for flaws, and so on. Security is an entire process and mindset, not just something you can "address at the end". If you skip any of these steps, it is just a matter of time before an attacker finds and exploits a security flaw.

And now, on to the story....

To test the new voting systems in place in Brazil, the Tribunal Superior Eleitoral (TSE) hosted a hacking challenge. The team that most effectively violated the security of the system would win R$5,000.

The results are now in, and it looks like the system did pretty well overall. Initially it was reported that none of the contestants were able to compromise the system's security. However, it was eventually revealed that one contestant, Sergio Freitas da Silva, was able to compromise the secrecy of votes by monitoring the radio waves emitted as the user typed on the keyboard (Van Eck phreaking).
"As I typed in the ballot box, tracked by radio to see if it detects any interference. I was able to track the interference that caused the wave, recording a WAV file with these sounds," he explains.

Sergio explained that after recording the sounds the electronic ballot box's buttons leave on the radio wave, the sounds can be decoded, revealing the candidates chosen by the voter and breaking the secrecy of the vote. [article]
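For the curious, here is a rough Python sketch of how that kind of decoding could work. This is my own illustration rather than Sergio's actual tooling: it assumes you already have a capture file plus a reference recording of each keypad button, and every file name and threshold below is hypothetical.

# Hypothetical sketch: recover button presses from a captured WAV file
# by matching each press against reference recordings of the keypad.
import numpy as np
from scipy.io import wavfile

def load_mono(path):
    """Read a WAV file and mix it down to a mono float array."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    return rate, data.astype(np.float64)

def find_presses(signal, rate, threshold=0.3, min_gap=0.15):
    """Return sample offsets where the normalized envelope spikes."""
    env = np.abs(signal) / np.max(np.abs(signal))
    gap = int(min_gap * rate)  # ignore samples this close to a prior hit
    hits, last = [], -gap
    for i, level in enumerate(env):
        if level > threshold and i - last >= gap:
            hits.append(i)
            last = i
    return hits

def classify(press, references):
    """Pick the button whose reference recording correlates best."""
    scores = dict((key, float(np.max(np.correlate(press, ref))))
                  for key, ref in references.items())
    return max(scores, key=scores.get)

rate, capture = load_mono("ballot_capture.wav")  # hypothetical capture
references = dict((str(d), load_mono("button_%d.wav" % d)[1])
                  for d in range(10))
window = int(0.1 * rate)  # assume each press lasts roughly 100 ms
for start in find_presses(capture, rate):
    print(classify(capture[start:start + window], references))

A real attack would need far more signal processing than this (filtering, better segmentation, noise handling), but matching captured emissions against known per-key signatures is the core idea.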


There was some pushback on the validity of this attack, since it required the observer to be in close proximity to the system as the user typed on the keyboard. Sergio argued that a strong antenna and higher-quality monitoring equipment would allow the attacker to observe from much greater distances.

Let's put things in perspective, though. This is not a new attack. Van Eck phreaking has been documented since at least 1985, and the impact of electronic emanations has been studied since at least the 1960s (TEMPEST). Nonetheless, my hat is off to all of the contestants. It's only through challenges like this and secure code review that we can begin to uncover the security flaws present in these critical systems.



-Michael Coates

Saturday, November 21, 2009

The OWASP Mission

Original document at owasp.org

OWASP AppSec DC 2009 Conference
Jeff Williams, OWASP Board Chair
The OWASP Mission

First I'd like to introduce the OWASP Board (Tom, Dave, Dinis, Seba, and myself). The board runs the OWASP Foundation, the 501(c)(3) nonprofit which provides support for all the activities that happen at OWASP. Like all the people involved in OWASP, we volunteer our time to make the project a success. I'd like to take this opportunity to thank each of you for all the hard work you do to make OWASP a success.

I'd also like to thank Joe for the thoughtful keynote and for focusing on the entire software supply chain. His focus on malicious intent is right on, and I'll be talking about that extensively tomorrow. If you combine all the materials available through his program with what's available at OWASP, we've got ALL the right stuff out there. But we are still losing ground.

For years, we have watched the software market fail to produce secure applications. The situation is worsening, and there are two key factors. First, the reliance we place on our software infrastructure increases every day. Application software controls our finances, healthcare information, legal records, and even our military defenses. Second, application software is growing and interconnecting at an unprecedented rate. The sheer size and complexity of our software infrastructure are staggering and present novel security challenges every day. While we have made some progress in security over the last decade, our efforts have been almost completely eclipsed by these factors.

The software market and security experts still struggle to eliminate even simple, well-understood problems. Take cross-site scripting (XSS) for example. In the last decade, XSS has grown from a curiosity to a problem to an epidemic. Today, XSS has surpassed the buffer overflow as the most prevalent security vulnerability of all time. It's the same for SQL injection, and CSRF will follow the same pattern.

These problems, while technically simple, have proven extraordinarily difficult to eradicate. We can no longer afford to tolerate software that contains these kinds of easily discovered and easily exploited vulnerabilities. Read about the RBS WorldPay attack from this week; the level of coordination and sophistication required to pull off that attack is stunning. Beyond direct risks like these, we are already seriously limiting innovation in the development of applications that could improve the world.

Why doesn’t the software market produce secure software?
It's possible that the risks we focus on are overblown and that the market is actually working to produce an optimal level of security in our applications. But the other possibility is that the software market is broken. Despite what you might hear in economics class, markets are not perfect. They have failures like monopolies, price-fixing, and speculative bubbles.

One classic market problem was detailed in a Nobel Prize-winning paper by George Akerlof called "The Market for Lemons." Basically, he showed that when sellers have more information than buyers (like when you're selling your used car that barely runs), buyers will discount the price they're willing to pay. That means people with good cars can't get a fair price, so they won't sell. And that means you can only buy lemons in the used car market.

Now think about that for software. Buyers really can't tell the difference between secure software and insecure software, so they're not willing to pay more for security.

We need radical, innovative ideas to fix the software market. We are not going to "hack our way secure"; it's going to take a culture change.
The automobile industry made that change over a 30-year period after Ralph Nader exposed the industry, and today we have cars built around safety features. The food industry made the change, but only after the FDA started the Nutrition Facts program. Even the cigarette industry has been dramatically changed through campaigns like the "Truth" campaign.

The OWASP mission is to make application security visible. Creating transparency goes directly to the heart of what is wrong with the software market and has the potential to actually change the game.

Why is OWASP the right approach?
OWASP is a worldwide free and open community focused on improving the security of application software. Everyone is free to participate in OWASP, and all of our materials are available under a free and open software license.

In many ways, we're like public radio. This allows us to reach a very broad audience, and it makes it possible for us to avoid difficult commercial relationships that influence our activities. This freedom from commercial pressures allows us to provide unbiased, practical, cost-effective information about application security.

I believe this objectivity is absolutely critical. For too long, much of the appsec information in the market has come from people selling stuff, and our message has been lost.

What is OWASP doing?
Yesterday, OWASP Leaders from around the world got together to discuss our progress and set our priorities for 2010. Each of our Global Committees reviewed their accomplishments, and we discussed the agenda for the future. We just established these committees last year, and they are already making huge progress establishing the foundation we need to achieve our mission.

Before I ask Tom to review our 2010 agenda,

Friday, November 20, 2009

IE8 XSS Filter Bug

The Register just ran an article (IE8 bug makes 'safe' sites unsafe) about a flaw in Internet Explorer 8's XSS filtering. I have researched the IE8 filter in the past and provided some of my thoughts on the matter.

As the article correctly states, the details of the actual flaw have not been made public, so I'm not aware of its specifics. According to the article, the flaw was reported to Microsoft several months ago, and we can presume Microsoft is actively working on a solution.

With that said, I thought I would discuss some of the technical anomalies of the IE8 XSS filter so that an organization can begin to evaluate whether it should, at least temporarily, disable the IE8 XSS protection for the users of its site.

The intent of IE8's XSS filter is to provide a feature which "makes reflected / “Type-1” Cross-Site Scripting (XSS) vulnerabilities much more difficult to exploit from within Internet Explorer 8." [blogs.msdn.com] I believe this is a noble goal, similar in spirit to the NoScript plugin for Firefox. The blog post on msdn.com has a good example of the filter working as intended. The demo application has a reflected XSS vulnerability: it accepts user data from the URL and returns it to the page without output encoding - classic XSS. The IE8 XSS filter detects this and safely renders the attack harmless.

How The Filter Works

Now let's take a look at how the protection works in the field. Luckily, we have two sites available to illustrate the functionality. Google has turned off the XSS filter with the header X-XSS-Protection: 0, whereas Yahoo has allowed IE8 to use the XSS filter as designed.
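As a point of reference, opting out of the filter is just a matter of sending that response header. Here is a minimal sketch in a Python WSGI application; the framework choice and page content are mine, not taken from either site.

def application(environ, start_response):
    """Serve a page that opts out of IE8's XSS filter, as Google does."""
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        ("X-XSS-Protection", "0"),  # tell IE8 not to apply its XSS filter
    ])
    return ["<html><body>search results here</body></html>"]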

The following two links illustrate the changes made by the XSS filter when an attack is detected. Each link is the URL that results when searching for test<script>:

http://www.google.com/search?hl=en&q=test%3Cscript%3E

http://search.yahoo.com/search?p=test%3Cscript%3E
[edit: 1/13/2010: It looks like Yahoo has also decided to disable IE8 XSS protection. Therefore the above link will not work to illustrate this example. Glad I captured the screenshot below. However, you can check out a similar example with Facebook]



The screenshot above shows the results when following the Yahoo link. The reason we see raw JavaScript on the rendered page is that the IE8 filter has performed a blanket replacement of <script> with <sc#ipt> throughout the entire response. This does in fact render most XSS attacks inert, but it also has the unintended consequence of disabling all JavaScript on the resulting page.

Here is a snippet of the HTML from the Yahoo page. The change made by the filter is the <sc#ipt> near the beginning of the snippet. If you were to search through the entire response, you would see that every <script> has been replaced with <sc#ipt>. Also, the final line with the "Search results" is an HTML entity encoding of the search value. This encoding is performed by the Yahoo page itself and is unrelated to the IE8 filter; it's just Yahoo practicing good design.

<html lang="en"><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=8"><sc#ipt>(function(){var h=document.documentElement;h.className+=" js";(new Image()).src='http://a.l.yimg.com/a/i/us/sch/gr4/srp_metro_20090910.png';})();</script><link rel="alternate" type="application/rss+xml" title="Yahoo! Search results for test&lt;script&gt;"

What's the Risk with the Filter?

There are two possible concerns that should be considered:
1. Is there a potential flaw in the output encoding that is being performed by the filter?
2. Does the act of disabling all script tags throughout the page actually introduce a new vulnerability?

#1
If an attacker could find a flaw in the output encoding (which at this point is the translation of <script> to <sc#ipt>), then the attacker could potentially craft a value that would evade detection by the filter. Alternatively, it may be possible to identify a weakness in the translation itself that allows an attacker to insert a particular value which becomes malicious as a result of the translation.

A very basic example of this concept is a regex that removes the first instance of the word "script" from a tag. If an attacker submitted <script>, this imaginary filter would output <>. This would stop a basic attack. However, if an attacker submitted <scriptscript>, then the resulting value would be <script> - which is malicious. This is the idea behind potential flaw #1.
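To make that concrete, here is the imaginary single-pass filter in a few lines of Python. This is the hypothetical filter described above, not IE8's actual translation logic.

import re

def naive_filter(value):
    """Imaginary filter: strip only the first occurrence of 'script'."""
    return re.sub("script", "", value, count=1)

print(naive_filter("<script>"))        # prints "<>"       - attack stopped
print(naive_filter("<scriptscript>"))  # prints "<script>" - filter defeated

Any filter that transforms its input only once invites this class of bypass; safer designs apply the transformation repeatedly until the output stops changing, or encode rather than strip.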

#2
The second concern is that disabling JavaScript throughout the page will inadvertently introduce a new vulnerability. Consider a scenario where an application relies heavily on AJAX. What would happen if JavaScript were suddenly disabled as a result of the XSS filter? More than likely this would just break the page. That isn't a security concern, but a usability concern, and I think I'm OK with trading usability for security in this example.

But what about a scenario where the application is using JavaScript as a security control to protect the user? (We all agree JavaScript cannot be used to protect the application from a user, but there are some scenarios where it could be used to protect the user from the content.) Consider some sort of mashup application that uses JavaScript to perform output encoding on data from third-party sources. For whatever reason, the application made a design decision to perform the output encoding in client-side JavaScript. In this scenario, IE8's disabling of script tags throughout the page could actually disable security-related JavaScript code. Could this allow malicious mashup content from the third-party source to execute?
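To illustrate the worry, here is a contrived sketch that applies the filter's observed blanket replacement to a page whose defense is a client-side sanitizing script. The page layout and the sanitizeWidgets function are entirely hypothetical.

# Contrived example: the blanket <script> replacement also neuters a
# script the page was counting on for security.
PAGE = ("<html><body>"
        "<script>sanitizeWidgets();</script>"  # hypothetical defensive code
        "<div id='partner'>...third-party content...</div>"
        "</body></html>")

def ie8_style_neuter(html):
    """Mimic the observed behavior: every <script> becomes <sc#ipt>."""
    return html.replace("<script>", "<sc#ipt>")

print(ie8_style_neuter(PAGE))
# sanitizeWidgets() never runs, so whatever encoding it was supposed to
# apply to the third-party content simply never happens.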

Should Your Site Disable The IE8 XSS Filter?

I wouldn't rush to judgment and disable the filter. At this point we have word that there is a potential weakness and that it is being addressed by Microsoft. We don't know of a public exploit at this time and hence can't thoroughly evaluate the impact on our respective applications. It would be prudent to review the impact of the XSS filter on your particular application and determine the effects of suddenly disabling the script tags within the page. More than likely this will result in the page not functioning correctly. But hey, that's not so bad if it protects the user from an XSS compromise.

-Michael Coates

Wednesday, November 11, 2009

Watch AppSecDC Live

Unable to make it to OWASP AppSec DC this week? Watch it live below.



Follow the Twitter stream at #AppSecDC

-Michael Coates

Thursday, November 5, 2009

Yet Another SSL/TLS Vulnerability Released

Another SSL/TLS vulnerability was recently disclosed. This weakness appears to affect applications which use client-side certificates for user authentication. More specifically, the weakness lies in the renegotiation feature. For many people this will not be an issue, since client-side certificates are rarely used with large Internet-facing applications.

However, some of the more secure applications do rely on client-side certificates for two-factor authentication. These groups should take notice and start preparing to implement fixes when they become available.
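In the meantime, you can at least check whether a given server currently honors client-initiated renegotiation using OpenSSL's s_client (the hostname below is a placeholder):

openssl s_client -connect example.com:443

Once the handshake completes, type R on a line by itself to request renegotiation; the session will renegotiate if the server permits it.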

According to the Register article, this issue has been known since September, and key players have been working to develop a solution. A new proposal is expected to be submitted to the IETF today.

Here are the links so far. Anyone out there have any more info at this time?

Register Article
Martin Rex Related Security Research & Response
Analysis by Ivan Ristic

-Michael Coates


Image source:
http://www.flickr.com/photos/subcircle/500995147/
http://subcircle.co.uk

OWASP Application Security Conference - DC

I really don't have to try to convince anyone. This is more of a last-call notice. The upcoming OWASP DC conference is going to be great! But in the event you've been living in a small dark box for the last 6 months, here is the info once again.




Conference

Schedule Day 1
Schedule Day 2

Register

I'll be there and speaking on Day 1 (AppSensor, SSL/TLS).

Hit me up if you attend @_mwc


-Michael Coates

Tuesday, November 3, 2009

AppSensor Project Featured on OWASP Podcast 51

The OWASP Podcast episode featuring AppSensor is now available online! This podcast was recorded at OWASP AppSec EU Poland in May of this year.

Have a listen

Full OWASP Podcast List

Interested in AppSensor? Check out my upcoming talk at OWASP DC - Defend Yourself: Integrating Real Time Defenses into Online Applications


-Michael Coates

Monday, November 2, 2009

HTTPS Data Exposure - GET vs POST

Here is a quick chart showing the data exposure to consider when choosing GET vs POST and HTTP vs HTTPS. The secure choice for transmitting any sensitive data is to use POST over SSL/TLS. Any other option will expose data at some point in the communication.

  • HTTP + GET: URL arguments are exposed in transit and recorded in proxy and web server logs.
  • HTTP + POST: body arguments are exposed in transit to anyone who can observe the traffic.
  • HTTPS + GET: URL arguments are encrypted in transit, but still end up in web server logs and the browser history.
  • HTTPS + POST: body arguments are encrypted in transit and are not written to logs or history by default.

  • URL arguments refer to arguments in the URL for GET or POST (e.g. foo.com?arg1=something).
  • Body arguments refer to data communicated via POST parameters in the HTTP request body.
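To make the chart concrete, here is a quick illustration with Python's urllib2; the endpoint and parameter name are made up.

import urllib
import urllib2

args = urllib.urlencode({"session_token": "s3cret"})

# Bad: even over HTTPS the token is part of the URL, so it ends up in
# web server logs, proxy logs, and the browser history.
urllib2.urlopen("https://example.com/account?" + args)

# Better: supplying data makes this a POST, so the token travels in the
# request body, which TLS encrypts and servers do not log by default.
urllib2.urlopen("https://example.com/account", args)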
This chart does not address client-side caching of temporary files. Caching is a separate issue from the protocol selection and should be addressed with appropriate cache-control headers.
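For reference, a typical set of cache-control response headers for sensitive pages looks something like the following; exact values vary by application, so treat this as a starting point rather than a prescription.

Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: 0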