Archive for January, 2011
In 2010 there was an apparent rise in hacking and computer crime, according to security reports from RSA, the Department of Justice's cyber crime division, and the FBI. Some of the worst threats come from phishing scams, which can lead to identity theft and other online fraud.
All of these threats come from sources worldwide. What do the statistics show about the rise? According to a report released by RSA, phishing attacks worldwide increased 7% between July and August 2010. The United States currently leads as the country suffering the most attacks, with 35% of online cyber threats aimed at US citizens; the US was also the country hosting the most attacks, with 60% of phishing attacks originating there.
The recent November report that was released by the RSA noted that there was a slight decrease in cyber attacks between August 2010 and September 2010, with total attacks decreasing by one percent. The reasoning is that there haven’t been as many noticeable attacks against larger corporations, such as banking and financial institutions. This doesn’t necessarily translate to greater online safety.
A recent PBS special revealed that the Pentagon receives over six million hacking and security threats a day and that some departments, such as the Department of Energy, also suffer from vulnerabilities in their online security.
Another report, Verizon's 2010 Data Breach Investigations Report covering 2009, notes that hacking attempts and malware held the number 2 and 3 spots among causes of the data breaches that companies experienced (the number 1 cause was the abuse of privileges). About 40% of data breaches happened due to hacking, while 38% were the result of malware.
What the Statistics State
Hackers are becoming more diligent at finding ways to break into computer systems. The November 2010 report from RSA covers the new Zeus 2.1 trojan, which can in essence mask its signature to avoid being removed by antivirus or other programs that protect and secure a computer system.
While many strides and advances have been made to secure our online privacy, the statistics above show that we still have a ways to go to ensure that personal and private data cannot and will not be compromised. Keeping operating systems, antivirus and antispyware programs up-to-date is one key to keeping information safe. Others include making your passwords hard to guess and being wary of suspicious links or offers that come to you via the Internet.
Source: RSA Anti-Fraud Command Center, RSA Online Fraud Report, November, 2010
Image content @ Stock.Xchng
Trusted Sites in Internet Explorer Using Group Policy
One of the Internet Explorer security zones that most system administrators manage is Trusted sites. Any IP address or website that they trust in their network or organization is placed in this zone. Managing the list of trusted IP and URL addresses is easy using the Internet Options in IE, but administrators may prevent you from changing the settings if they control IE's security zones using a Group Policy object. The image at the left shows an example of security zones managed by a system administrator, which means the end-user may not be able to add or remove trusted websites or IP addresses.
Managing Trusted Sites Using Group Policy Object
A non-home edition of Windows allows system administrators to access the group policy editor in Windows (type gpedit.msc in the run command to open the group policy editor console). The group policy editor includes a group policy object (GPO) to manage many settings in Windows, including the Internet Explorer component. To manage the trusted sites Internet Explorer group policy, navigate to the following:
- For all users in the network: Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Internet Explorer > Internet Control Panel > Security Page. In the details pane, double-click “Site to Zone Assignment List”.
- For the current user only of a single machine: Local Computer Policy > User Configuration > Administrative Templates > Windows Components > Internet Explorer > Internet Control Panel > Security Page. In the details pane, double-click “Site to Zone Assignment List”.
Whether the system administrator uses one of these locations or both, any websites they add are automatically applied by Internet Explorer. To add trusted sites in either GPO, select “Enabled” and then click the “Show” button. Enter the websites or IP addresses that IE should recognize as trusted sites. Note that you should not include http:// if you plan to trust all pages and protocols used by a website. For example, to trust the Bright Hub website, simply enter www.brighthub.com so that all pages and protocols (http://, https://, ftp://) are trusted. Always enter the value “2”, which assigns the entry to Internet Explorer's Trusted sites security zone:
In the above example, you will see the list is automatically applied to IE's Trusted sites security zone, and there is no way for end-users to add or remove entries because the list is managed by a system administrator using the Group Policy object editor in Windows:
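Under the hood, the Site to Zone Assignment List policy writes its entries to the registry. Here is a minimal sketch of the equivalent keys as a .reg fragment, assuming the standard policy path used by this setting (the brighthub.com entry is just the example site from this article):

```
Windows Registry Editor Version 5.00

; Zone values: 1 = Local intranet, 2 = Trusted sites,
; 3 = Internet, 4 = Restricted sites
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey]
"www.brighthub.com"="2"
```

Entries under the HKEY_LOCAL_MACHINE policy branch apply to all users of the machine; the same layout under HKEY_CURRENT_USER covers the per-user GPO described above.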
Third-Party Tool to Manage Trusted Sites Internet Explorer Group Policy Object
If you often modify or manage the security zones in Internet Explorer by adding or removing URLs or IP addresses for the Trusted, Restricted, or Intranet sites zones, you should consider the free ZonedOut utility. With ZonedOut, you can import a list of URLs or IP addresses from a text file instead of entering them one by one. ZonedOut also lets you add trusted sites to either registry hive, current user or local machine, meaning for the current user or for all users of a computer.
Image credit: Screenshot taken by the author.
The PlayBook is a lot smaller than it looks in pictures, and certainly smaller than its primary 7-inch tablet competition, the Galaxy Tab. The matte black is flat, and the device looks very “serious” — sort of the ThinkPad of tablets, if that makes any sense. There’s a camera in the center and a classy embossed BlackBerry logo and that’s it. On top there are very small but easily pushed buttons (the device’s only physical controls): volume, play / pause, and a power / lock button. Along the bottom edge are mini-HDMI, micro USB and charging plugs. The front is free of visible controls, but along the edge of the screen are actually capacitive sensors that let you swipe in from the edge — much like on Palm’s Pre. Right above the screen, centered, is a second camera.
The screen itself is wonderful. It’s very bright, colors are pitch-perfect, and the viewing angle is all you could ask for (iPad-level or better). The pixel density is great: the 1024 x 600 resolution works out to the same 170 ppi as the Galaxy Tab, versus the iPad’s 132 ppi. Touch responsiveness is mostly great, though we had a bit of trouble with some smaller controls at times, which could possibly be a software fault at this stage.
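Those density figures are easy to sanity-check yourself; here's a quick back-of-the-envelope sketch (the 7-inch and 9.7-inch diagonals are the advertised screen sizes):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(pixels_per_inch(1024, 600, 7.0)))   # PlayBook / Galaxy Tab: 170
print(round(pixels_per_inch(1024, 768, 9.7)))   # original iPad: 132
```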
We were in an admittedly dark room, but the 5 megapixel rear camera offered up some pretty horribly grainy images — hopefully it can manage something better under nicer conditions.
A base model will come with only WiFi for connectivity, though there will also be a WiMAX version out on Sprint in the summer. You can also tether with a BlackBerry smartphone over Bluetooth if you’ve got a hankering to BBM and only 7 inches of keyboard will satisfy. Under the hood is an (unnamed) dual-core 1GHz ARM Cortex-A9 processor.
The operating system has a lot of similarities to webOS. Some might say too many, but we’re not complaining. The primary innovation is extending the webOS’s famed swipe-up motion to all four sides, with a swipe from the top bringing up technical details like battery life, wireless status, and a shortcut to settings; a swipe from the bottom bringing up the main screen with a series of “cards” (our / Palm’s word, not RIM’s) in the center, showing that same tech detail bar across the top, and access to different categories of apps along the bottom; and swipes from the sides putting the PlayBook into a full screen cards UI for simple switching between applications.
While the cards might not be RIM’s most original innovation, they demonstrate the technical prowess of QNX quite well, since they can be set to stay live while you swipe through cards and even when you enter into other applications. Check out our video for a full-on stress test, but basically we weren’t able to hit the limit of the machine despite running a 1080p video, a realtime game of Quake III, a song in the music player, and a photo slideshow (though things did get a little laggy near the end, naturally). Luckily, you can configure multitasking to your taste, including pausing apps when you’re viewing cards, pausing apps only when you enter into another app, and the “all singing, all dancing” mode of everything live all the time.
When you’re in the main screen a swipe up on the apps drawer puts you into full-screen apps browser, which is mainly populated by Adobe Air apps right now. We weren’t able to see any of the BlackBerry applications (like BBM, Calendar, and Messages), since they require you to be tethered over Bluetooth to your BlackBerry phone, and RIM wasn’t ready for that. Maybe Mike L. didn’t want us to see his personal email for some reason.
BlackBerry’s own music app looked nice, though we didn’t see much functionality to it. The Documents To Go Sheets To Go app seemed pretty full featured, though UI-wise it seems pretty simplistic — more like a web app than an app app. There’s an official Adobe PDF reader, which was pretty speedy, but another yawner in the UI department. Everything looks nice, don’t get us wrong; we just don’t see any innovation in how you actually use these apps with all this tablet screen real estate.
The nicest “app” we saw was the browser, which reveals a tab “drawer” of sorts when you swipe down from the top. Performance-wise it was pretty much flawless, including smooth pinch to zoom and scrolling.
The touchscreen keyboard is very nicely laid out and super responsive, with possibly the best fake key tapping sound of all time. Unfortunately there’s no word prediction or auto correction right now, but hopefully RIM will port in some of that functionality from its phone OS before launch.
If you can’t tell, we’re very impressed. Overall the device is blazingly fast, comfortable to hold, and intuitive to use. Our largest concern is the apps: RIM has a bead on easily ported Adobe Air apps, but as Android has so ably demonstrated, the native SDK is very important for gaming, and we don’t know much about RIM’s skills there just yet — though we were assured that QNX is a very easy platform to develop for, one of the reasons RIM itself has been able to turn around this project in such a short time, with the promise of rapid updates in the future as well. If RIM can find an aggressive price, a nice market lull to launch in (before the Xoom and perhaps an iPad 2 launch and grab all the limelight), and a smattering of fairy dust, we could be looking at a pretty successful product here.
Exclusive: The future of the iPad 2, iPhone 5, and Apple TV, and why Apple is shifting its mobile line to Qualcomm chipsets
We’ve been hearing a ton of rumors about what direction Apple’s next set of products will take and when they’ll be available — but now we’ve got some concrete information from reliable sources which should make the path a little clearer. And that includes info on the next iPad, the iPhone 5, the second iteration of the new Apple TV, and a big change coming for all of the company’s mobile products. Want to know the scoop? Read along after the break to get the goods.
Right now, everyone is obviously buzzing about the Verizon iPhone 4. What people aren’t talking about (yet) is the fact the device will be using a Qualcomm chipset for its CDMA radio (with no GSM capabilities) as opposed to the Infineon versions seen in the GSM iPhone 4. This isn’t much of a surprise by itself, but it paves the way for a major shift from Apple. But first, a little bit of a timeline.
Just before the Verizon iPhone 4 launch, we’d heard from multiple sources — sources like the ones which gave us all that extremely accurate Apple TV info last time around — that the iPad 2 isn’t nearly as close to launch as some have speculated. Apparently, those case and hardware mockups we’ve seen are rather early versions, which means we’re still months out from a proper introduction. It seems likely that the device will land around April (perfectly timed with the 12-month product cycle Apple enjoys). And what about that device? From what we’ve been told, the thinner, sleeker tablet will sport a new screen technology that is akin to (though not the same as) the iPhone 4’s Retina Display and will be “super high resolution” (despite reports to the contrary). The device will remain at 10 inches but will now feature both front and rear cameras (not a huge surprise), and… there’s an SD slot. That’s right — our sources say with near certainty that the device will have a dedicated SD slot built in (with no traditional USB slot). In fact, see that weird notch in the photo below? That’s where the SD part will be located. What’s most interesting, however, is what’s happening under the hood.
The new iPad will feature a dual GSM / CDMA chipset produced by Qualcomm and will mark Apple’s shift away from Infineon as its chipset maker to Qualcomm for all of its mobile devices. It’s not clear if the chipset being used will be based on the company’s EV-DO / HSPA Gobi variety or an entirely new design. Presumably, the strength of the new dual-mode chipset is that it will allow both Verizon and AT&T to offer the iPad simultaneously.
But all of these moves are leading up to the iPhone 5 — a completely redesigned handset — which our sources say is on track for a summer launch. Right now, the device is being tested discreetly by senior staff at Apple (strictly on campus only). We don’t have much info on the phone at this point, but our understanding is that the new device will be a total rethink from a design standpoint and will be running atop Apple’s new A5 CPU (a Cortex A9-based, multi-core chip). This device, like the iPad 2, will feature a Qualcomm chipset that does triple duty as the CDMA / GSM / UMTS baseband processor — from what we hear there’s no LTE in the mix at this point.
One other interesting tidbit: Apple is at work on the second generation of its redesigned Apple TV, which will include that new A5 processor. The CPU is said to be blazingly fast, cranking out 1080p video “like running water.” It’s likely that the A5 will make it into the iPad 2 as well, but we have yet to confirm that.
So what does this all mean? Besides the surprise of an SD slot on the iPad, it all sounds fairly routine. It’s the complete move away from Infineon to Qualcomm that’s truly notable — marking one of the biggest shifts in suppliers and technology since the advent of the original iPhone. We’re working on getting more detail on all of these devices, and as soon as we do, you guys will be the first to know!
Can AT&T Forestall Large-Scale Defections?
A new ChangeWave survey of 4,050 consumers, completed just days before the announcement, focused on the impact of a Verizon iPhone on the major U.S. Wireless Service Providers.
An Industry in Upheaval
First, we asked survey respondents how likely they were to change their wireless service provider in the next 90 days.
A total of 10% said they plan on switching providers – 2-pts higher than a previous ChangeWave survey in September and the highest churn level of the past 18 months.
Importantly, when we compared the churn rates for the top wireless providers, we found major differences.
Only 4% of Verizon’s customers plan to switch in the next 90 days. In comparison, 10% of Sprint/Nextel’s customers say they plan to switch, as do 15% of both T-Mobile’s and AT&T’s.
As the following chart shows, AT&T’s churn rate is its worst ever in a ChangeWave survey.
What’s behind the weakening loyalty of AT&T customers?
First, better than two-in-five likely switchers from AT&T cite Poor Reception/Coverage (42%) as their top reason for leaving, followed by Dropped Calls (27%).
Secondly, the weakening loyalty of AT&T wireless customers is directly attributable to the upcoming release of a Verizon iPhone.
To gauge how many AT&T customers are going to switch to Verizon when they begin offering the iPhone, we asked:
For those who currently use AT&T as your wireless service provider, do you plan to switch to Verizon if-and-when they begin offering the iPhone?*
*Note: This survey was conducted in late December prior to the Verizon iPhone announcement.
A total of 16% of AT&T subscribers say they’ll switch to Verizon once it begins offering the iPhone. Another 23% say they don’t know if they’ll switch.
Importantly, current Apple iPhone owners are the most likely group of all to switch, with more than one-in-four (26%) saying they’ll leave AT&T for Verizon.
Note that among all AT&T subscribers planning to switch, two-in-five (41%) say they’ll do it within the first three months of the iPhone’s release – and another 31% within the first year.
A Bright Spot for AT&T
Despite the seemingly inevitable hemorrhaging of AT&T’s subscriber base, there was one bright spot for the wireless giant – a significant improvement in its dropped call rating.
Here’s a look at AT&T’s dropped call ratings vs. Verizon’s in ChangeWave surveys dating back to September 2008:
As the chart shows, while AT&T continues to struggle in this very important area and trails Verizon by a wide margin, it has made significant advances since our previous survey – improving from its all-time worst 6.0% rating last September to 4.7% in the current survey.
The findings suggest AT&T is now taking concrete steps to try to improve long-standing service issues. But can it do so quickly enough to forestall large-scale defections to Verizon?
Not according to our ChangeWave survey results. The Verizon iPhone is causing a major transformational shift in the wireless industry, and for now the momentum clearly favors Verizon.
To review the complete ChangeWave survey results, including the heated competition for future market share among Verizon, AT&T, Sprint/Nextel, and T-Mobile, follow the link below:
The Internet’s IPv4 clock keeps ticking down. As Robert Cannon, the FCC’s senior counsel for Internet law, observed recently, “The original [Internet] address space, IPv4, is nearly exhausted.” He’s so right.
Still, I’ll bet most of you are still scared to death of having to learn IPv6, never mind actually deploying it. I know I would be if I were an overworked network administrator. Fortunately, there is help.
The National Institute of Standards and Technology (NIST) has just released Guidelines for the Secure Deployment of IPv6 (PDF link). This is an excellent, free 188-page guide to IPv6. Besides covering the basics, it does a fine job of covering IPv6 security issues and how to deploy and manage dual IPv4/IPv6 networks. Frankly, it’s the best guide I’ve seen to date on how to actually put IPv6 to work on a network.
Color me green with envy: I’d planned on writing my own e-book on IPv6 sometime this year, and now I have a very high standard to shoot for. This isn’t just a network administrator’s manual; it’s also, to quote NIST’s Evelyn Brown, “a guide for managers, network engineers, transition teams and others to help them deploy the next generation Internet Protocol (IPv6) securely.”
That last word, “securely,” is an important one, and it’s another reason I highly recommend that you download a copy of this NIST document. As lead author Sheila Frankel said, “Security will be a challenge, however, because organizations will be running two protocols and that increases complexity, which in turn increases security challenges.” These challenges will “include fending off attackers that have more experience than an organization in the early stages of IPv6 deployment and the difficulty of detecting unknown or unauthorized IPv6 assets on existing IPv4 production networks.”
I know. Just what you needed: deploying a new network stack and a new set of network security problems. That’s why I can’t recommend strongly enough that anyone getting ready to deal with IPv6 read this document. You’ll be glad you did.
Networks need buffers to function well. Think of a network as a road system where everyone drives at the maximum speed. When the road gets full, there are only two choices: crash into other cars, or get off the road and wait until things get better. The former isn’t as disastrous on a network as it would be in real life: losing packets in the middle of a communication session isn’t a big deal. (Losing them at the beginning or the end of a session can lead to some user-visible delays.) But making a packet wait for a short time is usually better than “dropping” it and having to wait for a retransmission.
For this reason, routers—but also switches and even cable or ADSL modems—have buffers that cause packets that can’t be transmitted immediately to be kept for a short time. Network traffic is inherently bursty, so buffers are necessary to smooth out the flow of traffic—without any buffering, it wouldn’t be possible to use the available bandwidth fully. Network stacks and/or device drivers also use some buffering, so the software can generate multiple packets at once, which are then transmitted one at a time by the network hardware. Incoming packets are also buffered until the CPU has time to look at them.
So far, so good. But there’s another type of buffering in the network, used in protocols such as TCP. For instance, it takes about 150 milliseconds for a packet to travel from Europe to the US west coast and back. My ADSL line can handle about a megabyte per second, which means that at any given time, 150K of data is in transit when transferring data between, say, Madrid and Los Angeles. The sending TCP needs to buffer the data that is in transit in case some of it gets lost and must be retransmitted, and the receiving TCP must have enough buffer space to receive all the data that’s in transit even if the application doesn’t get around to reading any of it.
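That 150K figure is just the bandwidth-delay product: link rate times round-trip time. A quick sketch, using the ~1 MB/s and 150 ms numbers from the example above:

```python
def bandwidth_delay_product(rate_bytes_per_sec, rtt_sec):
    """Bytes 'in flight' on the path -- what each TCP end must buffer."""
    return rate_bytes_per_sec * rtt_sec

# ~1 MB/s ADSL line, 150 ms Madrid <-> Los Angeles round trip:
in_flight = bandwidth_delay_product(1_000_000, 0.150)
print(int(in_flight))  # 150000 bytes, i.e. the 150K in transit
```

The same formula explains the university numbers later in this piece: ten times the bandwidth at the same RTT means ten times the buffer, about 1.5MB.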
In the old days (which mostly live on in Windows XP), the TCP buffers were limited to 64K, but more modern OSes can support pretty large TCP buffers. Some of them, like Mac OS X 10.5 and later, even try to automatically size their TCP buffers to accommodate the time it takes for packets to flow through the network. So when I send data from Madrid to Los Angeles, my buffer might be 150K at home, but at the university, my network connection is ten times faster so the buffer can grow as large as 1.5MB.
The trouble starts when the buffers in the network start to fill up. Suppose there’s a 64-packet buffer on the network card—although it would be hard to fill it entirely—and another 64 packets are buffered by the router. With 1500-byte Ethernet packets, that’s 192K of data being buffered. So TCP simply increases its buffer by 192K, assuming that the big quake happened and LA is now a bit further away than it used to be.
The waiting is the hardest part
Of course with all the router buffers filled up with packets from a single session, there’s no longer much room to accommodate the bursts that the router buffers were designed to smooth out, so more packets get lost. To add insult to injury, all this waiting in buffers can take a noticeable amount of time, especially on relatively low bandwidth networks.
I personally got bitten by this when I was visiting a university in the UK where there was an open WiFi network for visitors. This WiFi network was hooked up to a fairly pathetic 128kbps ADSL line. This worked OK as long as I did some light Web browsing, but as soon as I started downloading a file, my browser became completely unworkable: every click took 10 seconds to register. It turned out that the ADSL router had a buffer that accommodated some 80 packets, so 10 seconds worth of packets belonging to my download would be occupying the buffers at any given time. Web packets had to join the conga line at the end and were delayed by 10 seconds. Not good.
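The conga-line arithmetic is straightforward: a full buffer delays every newly arriving packet by the time the whole queue takes to drain at line rate. A sketch with the numbers from that UK visit, assuming full-size 1500-byte Ethernet packets (ADSL framing overhead pushes the real figure a bit higher, toward the 10 seconds observed):

```python
def buffer_drain_time_sec(packets, packet_bytes, link_bits_per_sec):
    """Time for a full buffer to drain at line rate -- the queueing
    delay suffered by every packet that joins the end of the queue."""
    return packets * packet_bytes * 8 / link_bits_per_sec

# 80 full-size packets queued ahead of you on a 128 kbps ADSL line:
print(buffer_drain_time_sec(80, 1500, 128_000))  # 7.5 (seconds)
```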
Cringely got wind of the problem through the blog of Bell Labs’ Jim Gettys, which reads like a cross between a detective novel and an exercise in higher Linuxery. Gettys suggests some experiments to do at home to observe the issue (“your home network can’t walk and chew gum at the same time”), which seems to be exacerbated by the Linux network stack. He gets delays of up to 200ms when transferring data locally over 100Mbps. I tried this experiment, but my network between two Macs, using a 100Mbps wired connection through an Airport Extreme base station, was only slowed down by 6ms (Mac OS X 10.5 to 10.6) or 12ms (10.6 to 10.5).
Cringely gets many of the details wrong. To name a few: he posits that modems and routers pre-fetch and buffer data in case it’s needed later. Those simple devices—including the big routers in the core of the Internet—simply aren’t smart enough to do any of that. They just buffer data that flows through them for a fraction of a second to reduce the burstiness of network traffic and then immediately forget about it. Having more devices, each with their own buffers, doesn’t make the problem worse: there will be one network link that’s the bottleneck and fills up, and packets will be buffered there. The other links will run below capacity so the packets drain from those buffers faster than they arrive.
He mentions that TCP congestion control—not flow control, that’s something else—requires dropped packets to function, but that’s not entirely true. TCP’s transmission speed can be limited by the send and/or receive buffers and the round-trip time, or it can slow down because packets get lost. Both excessive buffering and excessive packet loss are unpleasant, so it’s good to find some middle ground.
Unfortunately, it looks like the router vendors and the network stack makers got into something of an arms race, pushing up buffer space at both ends. Or maybe, as Gettys suggests, it’s just that memory is so cheap these days. The network stacks need large buffers for sessions to high-bandwidth, far-away destinations. (I really like being able to transfer files from Amsterdam to Madrid at 7Mbps!) So it’s mostly up to the (home) router vendors to show restraint, and limit the amount of buffering they put in their products. Ideally, they should also use a good active queuing mechanism that avoids most of these problems either way.
Cringely may have a point when he suggests that ISPs are in no big hurry to solve this, because having a high-latency open Internet just means that their own VoIP and video services, which usually operate under a separate buffering regime, look that much better. But the IETF LEDBAT working group is looking at ways to avoid having background file transfers get in the way of interactive traffic, which includes avoiding filling up all those router buffers. This may also provide relief in the future.