The New USB Exploit: Dangerous and Undetectable


Image by Tasha Chawner under the Creative Commons License

Passing around USB sticks is one of the most common ways to share files between friends and associates. However, it is also a very common way to get viruses and other problems onto your computer. If you were not already aware, there are many viruses that can be put on a USB stick and set to run whenever the stick is inserted into a computer. Most of these viruses are relatively easy to catch and prevent with proper, up-to-date anti-virus software.

However, two security researchers, Karsten Nohl and Jakob Lell, have found a way to tamper with the firmware, the low-level code that controls how a USB device communicates with the computer. By reprogramming the firmware, they can make a USB device do things it’s not supposed to, such as invisibly alter text files or redirect your internet traffic. This is a problem because their malware, called BadUSB, does not reside in the USB stick’s flash memory, which is where all your files (and, normally, viruses) reside. Because it hides in the firmware instead, the malware is virtually impossible to detect, as there is no trusted, standardized copy of a device’s firmware to check against. Another important note is that an infected computer could then infect any USB device plugged into it; in other words, the problem goes both ways.

This news reinforces the idea that we, as users, need to know where a USB stick has been before plugging it right into our computers, and that it only takes one careless user to infect an otherwise safe system.


Source: Greenberg, Andy. “Why the Security of USB Is Fundamentally Broken.” Wired, 8-01-2014. Web. http://www.wired.com/2014/07/usb-security/


An Appeal to Reason

I would like to take a break from discussing technology this week to discuss the current issue facing my school. Within the last two weeks, James Madison University has come under a lot of criticism for how it handled a former student’s sexual assault case. Sarah Butters, the former student in question, was allegedly a victim of sexual assault on a spring break trip in Panama City, Florida, in 2013. As she told The Huffington Post, three of her supposedly close male friends took her top off, pulled her onto their laps, and attempted to remove her bikini bottoms while filming the entire thing. Additionally, she can clearly be heard on the video saying “no, we shouldn’t be doing this.” The video was eventually either leaked or distributed on purpose around the campus; I had never heard of this video until recently, so I don’t know how many people have seen it.

Eventually, Ms. Butters filed a formal complaint with JMU’s Office of Student Accountability & Restorative Practices, or OSARP for short. The young men were initially sanctioned with expulsion after graduation. Ms. Butters appealed, and they were expelled effective immediately. The young men then appealed, and the sanction was reverted to the original one. Ms. Butters, unhappy with this judgment, filed a formal complaint with the U.S. Department of Education, and James Madison University is now under a federal investigation.

As part of the ongoing investigation, JMU is very limited in what it can say: “Due to legal / privacy requirements, there are limitations to what we can say publicly about a pending matter of this type.” Given that, it is easy to conclude that nearly all the information we have received about this matter has come from Ms. Butters. If the only information we have received is from Ms. Butters, we can surmise that only one side of the story has been presented. If we only have one side of the story, it follows that we may not have all the facts of the case. And if we don’t have all the facts of the case, then it stands to reason that we cannot form an accurate and informed opinion about it.

For those of you who seek to form an informed and accurate opinion about this case, I propose that instead of lambasting James Madison University, we simply wait for the investigation to conclude. Those conducting the investigation will be hearing both sides of the story and will therefore have all the relevant facts. If they have all the relevant facts, then they can make an informed and accurate judgment. In America, we hold the idea of “innocent until proven guilty” in very high regard. Well, JMU has not been proven guilty; so far, only the accuser has had room to talk. This is akin to allowing only the prosecutor time to speak in a court of law and then sentencing the defendant without hearing any defense. That’s what is happening right now. It goes against the idea of American justice, and it goes against rationality.

I’m appealing to your reason, not your emotions. Let’s not assume that James Madison University acted improperly or illegally until we have all the facts; the University is innocent until proven guilty.


Harley Davidson Announces Electric Motorcycle


Image © 2014 Harley Davidson

As some of you might not be aware, I am a huge fan of Tesla’s electric cars, so naturally I was excited to hear of Harley Davidson’s new Project LiveWire. Unfortunately, it is just a prototype right now, but Harley Davidson intends to tour the United States to give customers a feel for how the bike handles and to hear what they think of it.

Electric cars are a relatively niche market, and electric motorcycles are almost nonexistent. For Harley Davidson to announce this is huge because the company has a long history and a wildly popular brand. Other motorcycle companies are bound to latch onto this idea in order to stay competitive, and consequently electric vehicles will get more attention in the public eye, which could be good or bad.

I hope this increased attention will convince more and more manufacturers of the electric car’s advantages and consequently bring down the cost of purchasing electric cars as more companies begin to make them.

It’s exciting to see such a big player entering the field; hopefully it’s just a matter of time before the rest of the industry follows.


Scientists Successfully Teleport Data via Quantum Computing


“In a paper published this past Thursday, physicists from the Kavli Institute of Nanoscience at the Delft University of Technology in Netherlands reported that they were able to reliably teleport data between two quantum bits separated by about 10 feet” (Markoff). These scientists were able to send data from one place to another without moving the actual physical matter that the bits are attached to.

The normal bits of information that we are familiar with are either 1 or 0, while quantum bits, or qubits as they are called, can exist in a combination of both values at the same time. This attribute holds the key to unlocking a new generation of computing and, theoretically, completely secure communication.
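To make that idea a little more concrete, here is a toy sketch of my own (not from the article, and heavily simplified): a single qubit can be described by two complex amplitudes, and applying a Hadamard gate to a qubit that starts as a definite 0 puts it into an equal superposition, so a measurement would come up 0 or 1 with equal probability.

/* qubit.c: a one-qubit superposition demo. Compile with: gcc qubit.c -lm */
#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void) {
    /* state = a*|0> + b*|1>, starting in the definite state |0> */
    double complex a = 1.0, b = 0.0;

    /* Hadamard gate: a' = (a + b)/sqrt(2), b' = (a - b)/sqrt(2) */
    double complex na = (a + b) / sqrt(2.0);
    double complex nb = (a - b) / sqrt(2.0);

    /* the probability of measuring 0 or 1 is the squared magnitude */
    printf("P(0) = %.2f, P(1) = %.2f\n",
           pow(cabs(na), 2), pow(cabs(nb), 2));
    return 0;
}

Running it prints P(0) = 0.50, P(1) = 0.50: the qubit is genuinely in both states until it is measured.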

As a proponent of quantum computing, I am very excited to read this news, as one of the major flaws associated with quantum computing is the question of realistic attainability. Sure, these things can be produced in the lab, but can they be scaled up to an entire functioning computer? That’s the real question that people, including businesses, want answered before they invest in this technology. Hopefully, this breakthrough will bring some more money into quantum computing research.

The primary objective of this research is to create a functional quantum computer that could entangle and maintain a large number of qubits for a long period of time; this has not been achieved as of yet.  If this were achieved, that computer would be able to solve certain classes of problems much faster than even the fastest computers in use today.


Source: Markoff, John. “Scientists Report Finding Reliable Way to Teleport Data.” The New York Times, 29 May 2014. Web. 30 May 2014.


I’m Back!

Hello, and good to be back, everyone. I was out for three weeks because my junior year of college was coming to an end and I needed to focus on finals and moving out. Then I wanted to take a little break to relax before I started my summer job, so here I am!

There’s been quite a lot of discussion regarding Net Neutrality since I last posted, and I think it’s worth going into some more detail on this topic.

The basic idea behind Net Neutrality is that all internet traffic should be treated equally. Whether you’re accessing your e-mail or watching a video on YouTube, it should all be treated the same. However, the FCC has created a proposal that leaves the door open for Internet Service Providers (ISPs) to implement a two-tiered internet, where content creators like YouTube or Facebook are essentially forced to pay the ISPs to grant their users faster access to their services. This is the idea of “paid prioritization” that many people, including myself, are opposed to because it runs counter to the idea of Net Neutrality. Paid prioritization would create an internet “fast lane” where websites that can pay for premium speed get priority, while smaller websites are stuck in the slow lane and thus receive less attention. What this means for you is a possible increase in fees for certain services, or new fees springing up where there are none at the moment.

Currently, the FCC’s proposal is a notice of proposed rulemaking, or NPRM for short, and the commission is asking the public to weigh in and comment on it. You can submit a comment on their website, http://apps.fcc.gov/ecfs/, under “Submit a Filing”.

I think the entire idea of paid prioritization could really stifle innovation because the cost of entry for internet start-ups would be much higher; for example, the next Twitter or Facebook would have trouble competing with the current ones because of cost, not because of weaker content. I am nervous to see where this proposal will go; I just hope it ends up serving consumers rather than the ISPs.


No Post This Week

Finals Week.

Should We Completely Encrypt the Internet?


Original Illustration: Getty Images

Most websites we visit implement some form of encryption to secure passwords, bank information, or other account information. This is done via SSL or TLS, which stand for Secure Sockets Layer and Transport Layer Security, respectively. Both of these are cryptographic protocols, which allow your computer to securely communicate with your bank or other services.
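For the curious, here is a minimal sketch of my own (not from the Wired article) of what opening one of these encrypted connections looks like in code, assuming OpenSSL 1.1 or later is installed; example.com is just a placeholder host, and the sketch performs only the handshake without verifying the server’s certificate.

/* tls_hello.c: open a TLS connection with OpenSSL's BIO API.
   Compile with: gcc tls_hello.c -lssl -lcrypto */
#include <stdio.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());  /* client-side TLS context */
    if (ctx == NULL) {
        ERR_print_errors_fp(stderr);
        return 1;
    }

    BIO *bio = BIO_new_ssl_connect(ctx);            /* SSL filter + TCP connect chain */
    BIO_set_conn_hostname(bio, "example.com:443");  /* placeholder host and port */

    if (BIO_do_connect(bio) <= 0) {                 /* TCP connect + TLS handshake */
        ERR_print_errors_fp(stderr);
    } else {
        printf("TLS handshake with example.com completed\n");
    }

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}

All of that machinery runs before a single byte of actual content is exchanged, which is exactly where the performance question below comes from.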

There has been a recent call by many security experts to enforce this type of encryption across every type of communication between devices on the Internet.  This would mean that everything from ordering food online to checking the news would be encrypted.

I think this is a really interesting idea because it’s a clever approach to the issue of security on the Internet. Encrypting everything would lead to more security, but there are real costs associated with doing this. First of all, there’s the very classical trade-off of performance vs. safety. Many programming languages emphasize speed (such as C/C++), some emphasize safety (such as Ada), and some sit somewhere in between (like Java). Enforcing safety takes time because the computer has to do extra checking that it would not otherwise bother with.
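A tiny C example of my own (not from the article) illustrates that trade-off: the unchecked read trusts the caller completely, while the checked read pays for an extra comparison and branch on every single access.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Fast but unsafe: an out-of-range index reads whatever happens to be in memory. */
int read_unchecked(const int *buf, size_t i) {
    return buf[i];
}

/* Slower but safe: every call verifies the index before touching memory. */
bool read_checked(const int *buf, size_t len, size_t i, int *out) {
    if (i >= len) {
        return false;              /* refuse out-of-range access */
    }
    *out = buf[i];
    return true;
}

int main(void) {
    int data[4] = {10, 20, 30, 40};
    int value;

    printf("unchecked: %d\n", read_unchecked(data, 2));

    if (read_checked(data, 4, 7, &value)) {
        printf("checked: %d\n", value);
    } else {
        printf("checked: index 7 rejected\n");
    }
    return 0;
}

Multiply that kind of extra work across every byte that crosses the wire and you get the performance concern behind encrypting everything.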

You can extrapolate this to HTTP and HTTPS, the non-encrypted and encrypted protocols, respectively, that are the foundation for sending data across the Internet. If you add an additional layer of security, it’s going to slow down data transfer because there’s another task the protocol must complete. It’s hard to say how large that slowdown is, however, and whether it would adversely affect website load times or performance.

Another issue is that smaller websites would need to actually purchase TLS certificates from vendors, which can vary a lot in cost.

I think I need more data on the exact performance overhead associated with complete encryption before I pick a side. Conceptually, however, I think this is a step in the right direction for the Internet, where security has taken a backseat for a long time.


Source: Finley, Klint. “It’s Time to Encrypt the Entire Internet.” Wired, Conde Nast Publications Inc., 17 Apr. 2014. Web. 18 Apr. 2014. http://www.wired.com/2014/04/https/


The Heartbleed Bug

What is Heartbleed?

A new bug in the popular OpenSSL cryptographic software library has been found, and it is incredibly terrifying. OpenSSL is a widely used open-source library that a large majority of the Internet relies on to encrypt data and to authenticate connections, like making sure it’s really your bank’s server your computer is communicating with right now and not someone else’s. This new vulnerability could lead to some potentially far-reaching exploits. Here is a link to http://heartbleed.com/, a website created by Codenomicon to give you more specific details regarding the bug.

Am I vulnerable?

The bug exploits servers for information; it doesn’t have a way of attacking an individual in the sense of sitting on your computer and actively gathering information from you. It’s really difficult to tell who’s affected and who’s not at this point because the bug leaves no trace of an attack. Several prominent sites such as Yahoo were shown to be vulnerable earlier this week, though I believe that at the time of this writing those sites have been patched. Either way, it would be a good idea to change your passwords on sites you know are no longer vulnerable. If a site is still vulnerable, there is nothing you can do, as the encryption keys may be compromised, meaning any password you change could still be easily decrypted by an attacker.

Who’s responsible?

It is almost impossible to pin this bug on any one individual because OpenSSL is an open-source project, which means that anyone can work on it. Most of the people involved are very competent developers, and even though contribution is open to everyone, every suggested change to the source code is reviewed and then approved or denied. The strangest thing about this vulnerability is how out in the open it was. It wasn’t hidden or designed never to be found. It was just sitting there, staring us right in the face. It is probably the result of accidental sloppy programming rather than a malicious actor. It’s also a very good illustration of why C can be such a dangerous and efficient language. There aren’t many things C won’t let you do. Imagine the most literal person you can think of, then multiply that by two, and that’s the C programming language. It does exactly what you tell it to; it doesn’t stop to think about safety, because that would slow it down.

How it works

Heartbleed is an interesting vulnerability because it is, for the most part, fairly simple. Essentially, an attacker can ask the server to hand over up to 64 kilobytes of its memory. The attacker doesn’t need to know in advance what’s in those 64 kilobytes; he can keep asking for more chunks until he can start piecing together the important parts, such as the encryption keys. If the attacker successfully steals a server’s encryption keys, it is very bad news for that server, because the attacker can now decrypt past and future communications with it and even impersonate the service at will. In case that doesn’t seem like a big deal, trust me when I say it is a very, very big deal.

The entire issue is centered around a function in the C programming language called memcpy, which stands for memory copy. This function takes three things as input: a destination, a source, and a size; it copies a block of memory size bytes big from source to destination. The reason this went wrong is that in OpenSSL nothing verified that what was at source was actually size bytes big. An attacker could claim to have sent a large payload while actually sending a tiny one, and memcpy would dutifully keep reading past the end of that tiny payload into whatever memory happened to sit next to it, copying it all into the reply. That neighboring memory could contain anything from random computer gibberish to passwords, usernames, and encryption keys.
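Here is a stripped-down sketch of that bug class, entirely my own illustration rather than the actual OpenSSL code: the handler trusts the attacker-supplied length and never checks it against the real size of the payload, so a made-up secret sitting next to the payload ends up in the reply.

/* leak.c: a simplified imitation of the Heartbleed bug class.
   Compile with: gcc leak.c */
#include <stdio.h>
#include <string.h>

/* Lay out memory so a fake secret sits right after the request payload. */
static struct {
    char request_payload[8];   /* what the client actually sent: "hello" */
    char secret_key[16];       /* something that should never leave the server */
} server_memory = { "hello", "TOP-SECRET-KEY" };

/* Buggy heartbeat handler: copies however many bytes the client CLAIMS
   to have sent, with no check against the real payload size. */
static void heartbeat_reply(char *reply, size_t claimed_len) {
    memcpy(reply, server_memory.request_payload, claimed_len);  /* over-read */
}

int main(void) {
    char reply[64] = {0};
    size_t claimed_len = 24;   /* attacker claims 24 bytes but only sent 5 */

    heartbeat_reply(reply, claimed_len);

    printf("echoed back: ");
    for (size_t i = 0; i < claimed_len; i++) {
        putchar(reply[i] ? reply[i] : '.');   /* the secret shows up in the echo */
    }
    putchar('\n');
    return 0;
}

The real fix in OpenSSL was conceptually the same as the check missing here: refuse to answer when the claimed length is larger than the payload that actually arrived.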

Let’s make this simpler to understand with the following scenario. Say there’s a new store in town that, for some reason, repacks your old boxes into new ones. Not that lucrative a business, but a business nonetheless. Now, the owner of the store is a great guy, but he can’t count. Like, at all. Normally that doesn’t matter: you walk in with a box, tell him “there are two shoes in here; please pack them into a new box,” he moves your two shoes over, and you’re on your way. This is how memcpy works normally: it moves exactly the amount it was told to move, and the amount it was told matches what you actually brought.

Now let’s say you’re a big jerk and you like to take advantage of the owner’s inability to count. You walk in with a nearly empty box containing a single pair of sandals and tell him there’s a huge pile of stuff inside. He starts packing your new box, and because he can’t count, once your sandals are in he just keeps grabbing whatever is sitting next to them behind the counter, which happens to be other customers’ belongings, until he figures he has moved as much as you claimed to bring. He sends you off, none the wiser, very pleased with himself, because two sales in one day is quite an event for this man. You walk out with a box that is almost entirely full of someone else’s stuff.

In this story, the other people’s belongings are the neighboring memory that memcpy read past the end of the real payload, the boxes are blocks of memory, and the owner is the memcpy function, which has no idea what it handed over; it just moved the amount it was told to move. That’s basically how the bug works, and it’s a form of buffer over-read, a close cousin of the classic buffer overflow attack.

Why should I care?

This bug is terrifying because of how many servers are vulnerable and how hard it is to detect. It will require a lot of due diligence on the part of server administrators to ensure they’re on the newest version of OpenSSL. Codenomicon estimates that approximately two-thirds of Internet sites use OpenSSL for encryption, and who knows how many of those are still vulnerable. That’s millions of websites that could be susceptible to this bug. I will repeat myself to reemphasize this point: change your password on sites that are no longer vulnerable. If you’re unsure whether a site is vulnerable, try doing some digging through the site’s news updates or check whether it publishes a list of updates and hotfixes. If a site doesn’t tell you whether it was vulnerable, I would be cautious about using it further; it might not be actively maintained, its operators may be unaware of the bug, or they may not want you to know, which is worse in my opinion.

Let’s just hope that this hasn’t been in the wild for too long, otherwise the Internet really will bleed.


Source: Codenomicon, http://heartbleed.com/
