Wednesday, September 7, 2011

Security » Scotland Yard Tightens the Pincers on Anonymous


Are law enforcement authorities making headway against hacktivist groups like Anonymous and LulzSec? It's possible -- last week Scotland Yard nabbed two people suspected of launching attacks under the moniker "Kayla." That's a name synonymous with the notorious attack on HBGary earlier this year.

It's been another wild and crazy week for the security community.

Scotland Yard arrested two suspected members of Anonymous and LulzSec Thursday.

Meanwhile, the major players in the browser market -- Google (Nasdaq: GOOG), Microsoft (Nasdaq: MSFT) and the Mozilla Foundation -- have chopped Dutch certificate DigiNotar off at the knees, apparently because it was slow to warn that hackers had broken into its network and issued rogue SSL security certificates.

Further, a security researcher released information that hackers could use to leverage Google's massive bandwidth and launch large-scale distributed denial of service (DDoS) and SQL injection attacks.

The Star Wars Galaxies gaming site was also hacked this past week, and the hacker posted the user IDs and passwords of 23,000 of the site's members on the Web.

Finally, a survey by security vendor Veriphyr has found that healthcare organizations are suffering data breaches hand over fist.

Ho, Hackers! The Game's Afoot!

Scotland Yard arrested two suspects in separate counties Thursday, reportedly under suspicion of conducting online attacks under the handle "Kayla."

"Kayla" was allegedly among those behind the February Anonymous intrusions perpetrated on HBGary Federal, a company claiming to provide security to the United States federal government.

The attackers defaced HBGary's website, stole and published 71,000 internal emails from the company, and posted a message denouncing the firm.

Lack of Speed Kills

On Monday, Google learned that some users of its encrypted services in Iran suffered attempts at man-in-the-middle attacks, where someone tries to intercept communications between two parties.

The attacker used a fake SSL certificate issued by Dutch root certificate authority DigiNotar.

It seems an intruder had broken into DigiNotar's systems back in July and stolen up to 200 rogue, or fraudulent, SSL certificates, some for major domains.

DigiNotar had known about the breach since July 19 but apparently had not disclosed the information.

In response, Google, Mozilla and Microsoft all revoked trust in the DigiNotar root certificate in their browsers.

"These certificates could be used as part of attacks designed to harvest user Gmail credentials and gain access to sensitive data," Norman Sadeh, cofounder of Wombat Security Technologies, told TechNewsWorld.

Disabling DigiNotar's root certificate authority was justified because "security across the Internet is a shared responsibility and our root certificate authorities must be held to the highest standard," Don DeBolt, director of threat research at Total Defense, told TechNewsWorld.

Google spokesperson Chris Gaither declined comment.
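
For readers wondering what "revoking trust" looks like from the client side, here is a minimal sketch in Python of inspecting a server certificate's issuer against a deny-list. The host name and the blocked-issuer string are illustrative assumptions; browsers actually distrust a CA by removing or blacklisting its root in their certificate stores, not by ad hoc checks like this.

    import socket
    import ssl

    BLOCKED_ISSUERS = {"DigiNotar"}  # hypothetical deny-list of issuer names

    def issuer_is_trusted(host, port=443):
        # Connect over TLS using the platform's trusted roots, then inspect
        # the issuer organization on the peer certificate.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        issuer = dict(field[0] for field in cert.get("issuer", ()))
        org = issuer.get("organizationName", "")
        return not any(blocked in org for blocked in BLOCKED_ISSUERS)

    print(issuer_is_trusted("www.google.com"))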

Leveraging Google's Bandwidth for Hacks

A security researcher has disclosed on the IHTeam blog how attackers can use Google's servers to launch a DDoS attack.

Hackers can also use the technique to launch SQL injection attacks, one of the top 10 vectors of attack, according to the tester, who goes by the handle "r00t.ati."

The tester posted the information Monday after Google's security center had failed to respond to a notification of the threat sent Aug. 10.

Google posted a message on the IHTeam blog Friday apologizing and stating it has tweaked its security.

"This is a serious issue, and even if Google fixes these two vulnerable pages, bad actors will likely comb Google's pages from now on looking for a similar vulnerability," Total Defense's DeBolt remarked.

"My understanding is, this is not a software vulnerability, but rather a description of service misuse that we have not seen in practice," Google spokesperson Jay Nancarrow told TechNewsWorld.

Multiple social networking and online translator sites could also be used by hackers to launch attacks in the same way, Nancarrow pointed out.
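
For readers unfamiliar with the SQL injection technique mentioned above, here is a minimal sketch of the attack and the standard defense, using Python's built-in sqlite3 module. The table, column and payload are illustrative, not the specific pages the researcher found.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

    user_input = "alice' OR '1'='1"  # a classic injection payload

    # Vulnerable: untrusted input is concatenated straight into the SQL text.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safer: the driver binds the value as a parameter, never as SQL text.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

    print(leaked)  # returns the row despite the bogus name
    print(safe)    # returns nothing, as it should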

The Force Isn't Strong With This One

This past week, a hacker broke into the Star Wars Galaxies gaming site, stole the user IDs and passwords of 23,000 members, and posted them on the Internet.

All the passwords are in plain text, the hacker said.

SWGalaxies isn't the only gaming site to have been victimized in recent months. Earlier this year, the Sega website and the Sony (NYSE: SNE) PlayStation Network were hacked, with the personal data of tens of millions of users stolen.

Are game sites more vulnerable than others? Not necessarily, but they often aren't as heavily fortified as, say, banking sites. That needs to change, Todd Feinman, CEO of Identity Finder, told TechNewsWorld.

"Any institution that stores personal information, including a password, should be held to a higher standard and be accountable for loss of sensitive data," Feinman stated.

Healthcare and Privacy

More than 70 percent of organizations responding to an online survey about protected health information said they had suffered at least one privacy breach in the past 12 months, according to a study conducted by security vendor Veriphyr.

Hospitals and health systems constituted 52 percent of the 90 respondents, Veriphyr CEO Alan Norquist told TechNewsWorld. Half the responding organizations had more than 1,000 employees.

The two leading types of breaches "involve legitimate insiders misusing their legitimate access to patient data by accessing the records for reasons other than healthcare," Norquist said.

Mobile Tech » Speedtest Won't Fix Your Poky Connection, but It Sure Is Nice to Know


When the data connection on your iPhone hits the skids, it's not always apparent why. It would be great to have an app that would automatically fix that. SpeedTest.net's app won't do that, but it will give you exact figures regarding how slow or fast your connection is at any given time. That info can be used to find the best spots for wireless connections and possibly deal with your ISP.

Speedtest.net Mobile Speed Test, an app from Ookla, is available for free at the App Store.

For the most part, I barely notice the incoming speed of my Internet data connections on my iPhone 4 or iPad 2. Sure, if I want to download something large, I make sure I'm on a WiFi connection. If I'm in a car (riding as a passenger), I'll think twice about attempting to download a bunch of email out of range of an AT&T (NYSE: T) 3G tower.

But sometimes -- usually when I'm streaming a video or really need to get some work done -- it's painfully obvious that the tiny invisible blips of data are not riding the waves very fast at all, for no discernible reason. In fact, I've had poor Netflix (Nasdaq: NFLX) streaming response while using a WiFi connection only to turn off WiFi on my iPhone 4 and stream via AT&T's cellular data service instead -- with much better results.

This used to be sort of trial and error, hit and miss. But now there's an app to help you better understand what sort of Internet data movement performance you can expect: Speedtest.net Mobile Speed Test by Ookla.

This free app works much like the widely and wildly popular desktop browser-based version at Speedtest.net. You start the test, which sends some sort of meaningless download data to your desktop (or in this case, iPhone) while the app measures the speed at which you're able to gobble the data. Then it reverses and uploads a smaller bit of data.
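
If you're curious what that kind of measurement looks like under the hood, here's a minimal sketch of the general idea in Python: time a fixed-size download and convert bytes per second into megabits per second. The URL is a hypothetical placeholder, not one of Ookla's actual test servers, and real speed tests are far more careful about warm-up, parallel streams and server selection.

    import time
    import urllib.request

    TEST_URL = "https://example.com/testfile.bin"  # hypothetical test payload

    def download_mbps(url=TEST_URL, chunk=64 * 1024):
        start = time.monotonic()
        total_bytes = 0
        with urllib.request.urlopen(url, timeout=30) as resp:
            while True:
                data = resp.read(chunk)
                if not data:
                    break
                total_bytes += len(data)
        elapsed = time.monotonic() - start
        return (total_bytes * 8) / (elapsed * 1_000_000)  # bits -> megabits

    print(f"{download_mbps():.2f} Mbps")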

As with most home Internet connections, at least in the U.S., the download speeds are far faster than the upload speeds. I'm not sure where the bottleneck or tech limitations are with this; I just recognize it as a fact of the data plans, most notably seen when a regular consumer is surprised at how long it takes to upload a simple video.

Back to Speedtest.net Mobile Speed Test

The Speedtest.net Mobile Speed Test app uses Ookla's massive global infrastructure to minimize the impact of Internet congestion and latency when it tests your bandwidth. I'm not sure what this means, exactly, but I get the impression that Speedtest.net has some brains that decide which servers to connect you to in order to try to get a reasonably accurate measure of your true download/upload speeds.

For example, it wouldn't make a lot of sense to connect you to a small overloaded server in Antarctica that's trying to communicate through a tiny pipe, nor does it make sense to connect you to servers with all sorts of switches and hops in between you and the server. Technically, a blip of data ought to be moving so quickly that thousands of miles mean nothing. But really, what all this means is that you'll likely see the Speedtest.net Mobile Speed Test app connect you to a regional server for your test. The default server chosen in my tests has been from a city about 80 miles away.
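
My guess at how that regional-server selection works, sketched very roughly in Python: measure how long a TCP connection takes to each candidate server and pick the quickest. The host names here are made-up placeholders, not Speedtest.net's real server list.

    import socket
    import time

    CANDIDATES = [
        "server-east.example.net",
        "server-west.example.net",
        "server-local.example.net",
    ]

    def connect_latency(host, port=80, timeout=3):
        # Time a bare TCP handshake; unreachable hosts sort last.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def pick_server(hosts=CANDIDATES):
        return min(hosts, key=connect_latency)

    print(pick_server())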

In my home, I tend to get my best bandwidth during the morning hours, but as the afternoon wears on, it seems as if my bandwidth falls off a cliff. I'm guessing that every kid in my neighborhood, in the city, in the county, and in the state, et al, either gets home from school and starts playing video games on Xbox Live or starts streaming some kid flick from Netflix. Or maybe it's not the kids, but if I'm thinking about downloading a video to buy on iTunes ... let's just say that I don't usually bother attempting it from 4 p.m. to 8 p.m.

In fact, I've had a roomful of family over during the holidays, and when we all finally agreed on which HD movie to rent on my Apple (Nasdaq: AAPL) TV, we realized that, oops, this puppy will be ready to watch in two hours.

For some people with wicked-fast Internet service plans, this is never an issue. For those of us unwilling to shell out big bucks for high-speed -- or who are located in areas not served with high-speed options -- the Speedtest.net mobile app will give you a quick way to judge your likely bandwidth, even if you're sitting over at your friend's house watching football or thinking about downloading a movie to watch while sitting in an airport waiting for your flight.

The Results

During one test in the wee hours of the morning, my download speed via my home-based DSL service (rated at 3.0 Mbps) delivered 2.21 Mbps to my iPhone 4. Not bad. I turned off WiFi and tried AT&T directly and got a paltry 0.43 Mbps download. Wow. I was shocked at the difference. Obviously, I expect WiFi to usually be faster, particularly when I'm browsing the Apple App Store. But this was a massive difference.

What about uploads? The WiFi delivered 0.55 Mbps in upload speed while AT&T let me push 0.24 Mbps.

What about bars and signal strength? When just using AT&T, I realized that I was in an area of my house that only gave me two bars of signal strength to my iPhone 4. With more bars, might I get a faster response? I moved to a couch where I get four bars and ran the test again, just a few minutes after the first test. The result? Worse. I got 0.27 Mbps on the download and 0.04 Mbps on the upload. I don't doubt that signal strength can influence your upload and download speeds, but I'm guessing that factors beyond your control, like perhaps how the people around you consume data, will have a larger effect on your personal bandwidth.

Bottom Line

All in all, the Speedtest.net Mobile Speed Test app won't actually fix any bandwidth problems, but it will alert you to possible issues with your data connections no matter where you go. For this reason, I count it among the pack of utility apps you'll want to have on hand, just in case. If you need some evidence to use in an argument with an Internet service provider in an effort to get a faster connection or a refund, this data won't technically prove anything. But from a practical standpoint, companies sometimes respond to customers who seem to have at least some data that backs up their righteous anger.

Or, you might want to have it on hand to help you pick a local coffee shop that's better able to suit your Internet-guzzling needs.

IT Management » Hybridizing the Cloud


Specialized hybrid clouds can often address a particular industry's needs more comfortably than one-size-fits-all services. Sometimes these can turn into sources of new business. For many companies, "maintaining their own infrastructure is not a competitive advantage for them," said NYSE Euronext's Steve Rubinow. "It's really a cost of doing business like telephones and office furniture."

When we hear about the cloud, especially public clouds, we often encounter one-size-fits-all services. Advanced adopters of cloud delivery models are now quickly creating more specialized hybrid clouds for certain industries. And they're looking to them as both major sources of new business and the means to bring much higher IT efficiency to their clients.

NYSE Euronext recently unveiled one such vertical offering, its Capital Markets Community Platform. We'll see how it built the cloud, which amounts to a Wall Street IT services destination, what it does, and how it differs from other cloud offerings.

Here to tell us about how specialized clouds are changing the IT game in such vertical industries as finance is Steve Rubinow, executive vice president and chief information officer at NYSE Euronext. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Listen to the podcast (21:40 minutes).

Here are some excerpts:

Dana Gardner: I'd like to hear more about how you put your cloud together. You're supporting these services both inside your cloud as well as your clients'. Why have you done it this way?

Steve Rubinow: It's the convergence of a couple of trends and also things that our customer started to tell us. Like a lot of companies, we started to use cloud technology within our own company to service our own internal needs for the reasons that many people do -- lower cost, more flexibility, more rapid spin up, those kinds of things, and we found, of course, that was very useful to us.

At the same time, we've talked to a lot of our customers via our commercial division, which we call "NYSE Technologies." By virtue of all the turbulence that's happened in the world, especially in the financial markets in the last couple of years, a lot of our customers -- big ones, small ones, banks, brokerages and everyone in between -- said that the infrastructure they have traditionally supported within their own companies could move to a new model, given the technologies that are available and given that we at NYSE Technologies want to provide these services. So we asked ourselves whether we should take a different look at what we are doing and see if we should pursue some of these things.

What it comes right down to is that many of these companies said that maintaining their own infrastructure is not a competitive advantage for them. It's really a cost of doing business, like telephones and office furniture. It would be better if someone else helped them with it, maybe not 100 percent, but in the way we propose to do, and everyone wins. They get lower cost, and they get to offload a burden that wasn't particularly strategic to them.

We say we can do it with good service and at a good price, and everybody comes away a winner. So we launched this program this summer, with one offering called "Compute on Demand," which has a number of attributes that make it different than your run-of-the-mill public cloud.

In the capital markets community, we have some infrastructure requirements that most companies wouldn't care so much about, but in our industry they are very, very critical. We have a higher level of security than an average company would probably pay attention to.

And reliability, as you can imagine. The markets need to be up all the time when they are supposed to be open. A few seconds makes a big difference. So we want to make sure that we pay extra attention to reliability.

Another thing is performance. Our industry is very performance-sensitive. Many of the executions are measured in microseconds. Any customer of ours, including ourselves, is keen to make sure that any infrastructure we would depend on has the ability to make sure that transactions happen. You don't find that in the run-of-the-mill public cloud, because there just isn't a need for the average company to do that.

For that reason, we thought our private offering, our community cloud, was a good idea. By the way, our customers seem to be nodding their heads a lot to the idea as well.

Gardner: Why have it as a hybrid model?

Rubinow: In the spirit of trying to accommodate all the needs that people will have: for many of the cloud services, you get the most leverage out of them if you as a customer are situated in the data center with us.

Many customers choose to do that for the simple reason of speed-of-light issues. The longer the network is between Point A and Point B, the longer it takes a message to get across it. In an industry where latency is so important, people want to minimize that distance, and so they co-locate there. Then, they have high-speed access to everything that's available in the data center.
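
To put rough numbers on the speed-of-light point, here is a back-of-the-envelope sketch, assuming light moves through optical fiber at roughly two-thirds of its vacuum speed; the distances are illustrative.

    SPEED_OF_LIGHT_KM_S = 299_792  # km per second in a vacuum
    FIBER_FACTOR = 0.67            # rough slowdown of light inside glass fiber

    def one_way_delay_us(distance_km):
        # One-way propagation delay in microseconds, ignoring switching hops.
        return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1_000_000

    for km in (1, 100, 5_000):
        print(f"{km:>5} km  ->  {one_way_delay_us(km):8.1f} microseconds")

Even a couple of hundred kilometers of fiber adds on the order of a millisecond each way, which is why microsecond-sensitive customers choose to co-locate.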

Of course, customers outside the data center certainly can have access to those services as well. We have a dedicated network that we call "SFTI," Secure Financial Transaction Infrastructure. That was designed to support high speed, high reliability, and high resiliency, things that you would expect from a prominent financial services network. Our customers come to our data centers over that network, and they can avail themselves of the services that we have there too.

We have historical data that a lot of our customers would like to take a look at and analyze, rather than having to store the data themselves. We have it all here for them. We have applications like risk management and other services that we intend to offer in the future that customers would be hard-pressed to find somewhere else, or if they could find it somewhere else, they probably won't find it in as efficient a manner. So it makes sense for them to come to us to take a look at it and see how they can take advantage of it here.

Gardner: Tell us about your organization, your global nature, and where you expect to deliver these cloud services over time.

Rubinow: The full name of the company is NYSE Euronext, and that reflects the fact that we are a collection of markets not only in the United States but also in Europe. We operate a number of cash and derivative exchanges in Europe as well. So we talk about the whole family being part of NYSE Euronext.

We divide our business into three segments. There is the cash business, which is global. There is the derivatives business, which is also global, and those are the things that people would normally have associated our company with, because those are the things we've been doing for many years.

The newest piece of our business is the piece that I've referred to earlier and that's our commercial technology business, which we call "NYSE Technologies." Through that segment of the business, we offer all these services, whether it be software products we might develop that our customers take advantage of or services as we've already referenced.

In a small way, over the years, we've been offering these services to our customers, and then a couple of years ago we decided to do it in a much bigger way, because we realized the need was there. Our customers told us that they would take advantage of these services. So we made a bigger effort in that regard. Right now, the commercial part of our business is several hundred million dollars a year in terms of revenue.

I have to add one note in terms of latency. For people who aren't familiar with our obsession with latency, the true textbook cloud profile means that services could execute anywhere. If we had 20 data centers across the world, workloads could run in any of those data centers, transparently to the customer, as long as they get done.

In our latency-sensitive world, we are a little bit constrained with some of the services that we offer. We can't afford to be moving things around from data center to data center, because those network differences, when you're measuring things in microseconds, are very noticeable to our customers. So some of our services could be distributed across the world, but some are very tied to a physical location to make sure we get the maximum performance.

To add further to that, one of the cornerstone technologies, as we all know, of cloud computing is virtualization. That gives you a lot of flexibility to make sure that you get maximum utilization of your compute resources.

Some of the services we offer can't use virtualization. They have to be tied to a physical device. It doesn't mean that we can't use a lot of other offerings that VMware (NYSE: VMW) provides to help manage that process, but some are tied to physical devices, because virtualization in some cases introduces an overhead. Again, when you're measuring in microseconds, it's noticeable. For many of our other services, though, virtualization is key to offering the flexibility and cost our customers expect.

So we have kind of a mixed bag of unique provisioning that's designed for the low-latency portion of our business, and then more general cloud technologies that we use for everything else in our business. You put the two of them together and we have a unique offering that no one else that we know of in the world offers, because we think we're the first, or at least among the first, to do this.

Gardner: So this is a rather big business undertaking for you. This cloud is really an instrument for your business in a major way.

Rubinow: That's right. Sometimes we think the core of our business is trading. That is the core. That's our legacy. That's the core of what we do. It's a very important source of our business, and it generates a lot of the things that we've been talking about. Without our core business, we wouldn't have the market data to offer to our customers in a variety of formats.

The technologies we used to make sure that we were the leader in the marketplace in terms of trading technology, and all the infrastructure to support that, are also what we're offering our customers. What we're trying to do is cover all the bases in the capital markets community, not only trading services, which of course are the center of what we do and core to everything that we do.

Around that core are all the things our customers can use to support their traditional trading activities, and then other things that they didn't previously look to us to do. These are things like extensive calculations that they would not have asked the NYSE to do in the past, but today they do, because we provide the infrastructure there for them.

Gardner: What are some of the underlying numbers perhaps of how this works economically?

Rubinow: From a metrics standpoint, it's probably too early to provide metrics, but I can tell you, qualitatively speaking, the few customers that we have that were early adopters are happy to get on stage with us and give great testimonials about their experience so far. So that's a really good leading indicator.

Again, without offering numbers, our pipeline of people wanting these services globally has been filling very nicely. So we know we've struck a responsive chord. We expect that we will fulfill the promises that we're offering and that our customers will be happy. It's too early, though, to say, "Here are three case studies showing how it's gone for our customers," because they haven't been in it long enough to deliver those metrics.

When we were putting together our cloud architecture and thinking about the special needs that we had -- and I keep on saying it's not run-of-the-mill cloud architecture -- we were trying to make sure that we did it in a way that would give us the flexibility, facilities, and cost that we needed. Many of the things needed to be done from scratch, because we didn't have models to look to that we could copy in the marketplace.

And we also realized that we couldn't do it ourselves; we have a lot of smart people here, but we don't have all the smart people we need. So we had to turn to vendors. We were talking to everyone that had a cloud solution. Lots of vendors have lots of solutions. Some are robust, and some are not so robust.

When it came down to it, there were only a couple of vendors that we felt were smart enough, able enough, and real enough to deliver the things to us that we felt we needed to get started. I'm sure we will progress over time, and there will be other people who will come into the picture.

But VMware was at the top of that list. We have been using its technologies internally for several years and have been very happy with them. Based on our historical relationship with VMware and the offerings VMware has in the traditional VMware space, plus cloud offerings like Cloud Director, we felt those were good cornerstone technologies to give us the greatest chance of success with few surprises.

And we needed partners to push the envelope, because we view ourselves as being innovative and groundbreaking, and we want to do things that are first in the industry. In order to do those with better certainty of outcome, you have to have good partners, and I think that's what we found at VMware.

Gardner: What did you learn? Is there any 20-20 hindsight or Monday morning quarterback types of insights that you could offer to others who are considering such cloud and/or vertical specialty cloud implementations?

Rubinow: It goes back to the comments I just made in terms of choosing your partners carefully. You can't afford to have a whole host of partners, dozens of them, because it would get very confusing. There's a lot of hype in the marketplace in terms of what can be done. You need people that have abilities, can deliver them, can service them, and can back them up.

Every one of us who's trying to do something a little bit different than the mainstream, because we have a specific need that we're trying to service, has to go into it with a careful eye towards who we're working with.

So I would say to make sure that you ask the right questions. Make sure you kick the tires quite a bit. Make sure that you can count on what you're going to implement and acquire. It's like implementing any new technology. It's not unique to cloud.

If you're leading the charge, you still want to be aggressive but it's a risk management issue. You have to be careful what you're doing internally. You have to be careful who you're working with. Make sure that you dot your I's and cross your T's. Do it as quickly as you can to get to market, but just make sure that you keep your wits about you.

Computing » FSF's Star Turn in the Android FUDathon, Part 1


The only requirement for being allowed to redistribute under the GPLv2 is that you are currently in compliance with the license, and this is how everyone except the "usual suspects" have been treating it on a case-by-case level. There is simply nothing in the GPLv2 license to prevent someone who is currently in compliance from redistributing, and it is in nobody's interest to imply otherwise.

My first thought was that someone was engaging in click-bait journalism. Even the title of the post -- "Android GPLv2 termination worries - one more reason to upgrade to GPLv3" -- is something I would expect from anti-Android trolls, not the Free Software Foundation.

The conclusion at the bottom of the article, that companies using Android should urge Linux developers to switch to the GPLv3, is so bad it's not even wrong. It betrays a singular unawareness of the mobile market that Android serves.

Mobile phone manufacturers don't make different silicon for each market -- instead, they customize the software so that the phone can be type-approved by regulators and carriers in each country individually. Things like maximum transmitter output, radio channels, and how the device interacts with the cellular network all need to be customizable, and the device needs to be tamper-resistant.

A GPLv3 Android phone, with all the decryption keys available to any user on demand, is a non-starter. No manufacturer will make such an insecure-by-design device. No telco would put the stability of its network at such risk. No informed consumer would want one.

So what about those GPLv2 "permanent" terminations?

A New License Is Only a Download Away

"Take-it-or-leave-it" licenses like the GPL are a form of contract known variously as a "contract of adhesion," "boilerplate contract" or "standard form contract." As such, they are subject to special rules that require any ambiguities to always be resolved in favor of the recipient (contra proferentem). This "you made your bed, you sleep in it" approach is the same one we learned when we were kids -- whoever cuts the birthday cake can't complain about getting a smaller slice.

Contrary to the article's claim of "permanent termination" for violating the GPLv2 license, it's very easy to get a new license to resume distribution of a GPLv2 program. Just download or otherwise get a new copy, as per section 6 of the GPLv2, and you automatically receive a new license grant, which is valid for as long as you remain in compliance.

While this doesn't "whitewash" any problems that arose under the old license grant, it's clear that the new license cannot have additional restrictions, such as a past license termination, imposed on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. (emphasis added)

What does contra proferentem mean for entities that had a GPLv2 license instance terminated? Among other things, if they return to compliance, they have every right to rely on the automatic license grant provisions of section 6 of the GPL when they obtain a new copy of the program.

The word "permanently" never appears in the license, and any ambiguity as to whether the termination of a previous license under section 4 prohibits them from getting a new license must be resolved in their favor. Not that there's much room for ambiguity -- "Each time ... receives a license" makes it clear that every copy comes with its own license instance.

Quick Summary for the TL;DR Set

While it is true that section 4 of the GPLv2 license terminates your right to redistribute when you fall out of compliance, section 6 is equally clear when it states that you get a valid license from the copyright-holder with each new copy you receive. Resuming distribution is simply a matter of returning to compliance and downloading a new copy.

It's true that this won't "fix" previous compliance problems; depending on their nature, they may have to be negotiated with the copyright-holders or decided by a court, but the threat of the ultimate "big stick" -- of never being able to resume distribution with the new license automatically granted under section 6 -- is an attempt to impose restrictions that neither a plain reading of the license nor the rules dealing with take-it-or-leave-it contracts allows.

My First Email

I went to the source to try to clarify both points -- changing Linux to GPLv3 and terminations under GPLv2 licenses. I wrote the author, Brett Smith, on August 22:

Hi:

In an article posted on the fsf website, you wrote the following: "Companies that sell products that use Android can help out by encouraging the developers of Linux to make the switch to GPLv3."

Unlike most GPL-licensed software, linux is licensed as "GPL version 2," not "GPL version 2, or at your option any later version."

Linux simply cannot ever be switched to GPLv3 without a significant rewrite, because at least some of the people whose code is in linux are now dead, and others will refuse because they have no problem with GPLv2 license terms.

This has been pointed out time and again. Under copyright law -- which is the basis of the GPL -- Linus doesn't have the ability or the right to take code written and owned by other people, that was licensed by them exclusively under the GPLv2, and change the license to "GPLv2 or later."

It would almost be easier (and would certainly avoid the whole "derived works" problem) to switch to BSD. Fortunately, there is no need to consider that -- the "problem" of forever losing the right to redistribute even after you are back in compliance simply doesn't exist.

The only requirement for being allowed to redistribute under the GPLv2 is that you are currently in compliance with the license, and this is how everyone except the "usual suspects" have been treating it on a case-by-case level. There is simply nothing in the GPLv2 license to prevent someone who is currently in compliance from redistributing, and it is in nobody's interest to imply otherwise.

Unfortunately, your article has already attracted the attention of people, and is generating FUD around Linux usage, as "proof" that Linux is unsafe for business.

Thank you for your attention to this matter.

Barbara Hudson

Computing » Chronicles of Desktop Deaths Foretold


Consultant and Slashdot blogger Gerhard Mack has a desktop, a laptop, an HTC Desire Z cellphone and a work-provided Galaxy Tab. "Care to guess which one I use the most?" he asked. "It's the desktop. My desktop has more power than the rest of my devices put together, the keyboard is at the proper typing height, and the monitors are on an ergonomic stand to keep my neck from being strained."

Now that September has arrived at last, life has taken on a different tone here in the Northern reaches of the Linux blogosphere.

After all, just around the corner now are crisp and cool days, Halloween, and the crunch of fallen leaves underfoot as nature prepares for its long winter sleep.

It's perhaps no great surprise, then, that many thoughts seem to have turned to death and dying in this season of decay. No longer confined to a few heavily air-conditioned bars and saloons, bloggers have begun to lift their heads and ponder the end of things -- not just in the natural world but in technology as well.

Death All Around

"The end of the OS is nigh," read one headline not long ago, for example.

"Desktop computers changing, not dying" insisted another.

And again: "Desktop: 'The report of my death was an exaggeration,'" read yet another.

There's been a distinctly morbid focus in the Linux blogs lately, in other words, and Linux Girl wanted to learn more.

'The Desktop Is Here to Stay'

"What is 'death' here?" mused Chris Travers, a Slashdot blogger who works on the LedgerSMB project. "It seems to me that what people are saying is not that we won't use these things, but that they won't occupy the central role in our lives that they have in the past. In all cases, we are talking about trends that are exaggerated."

Desktops, for example, "will always be extremely handy forms of computers," Travers told Linux Girl. "Nobody is going to stop using a desktop just because they now have a series of mobile devices. Desktops are too useful in business and at home for that to stop, and they are far less expensive than even laptops of comparable power and reliability."

In other words, "the desktop is here to stay," he asserted.

'The Browser Is More Important'

Same with the OS, Travers added. "While a lot more may run in the browser, that hardly makes the OS less relevant. Something has to provide the base services to the browser."

What's actually happening, then, is that people are simply less attached to the OS, he suggested.

"What the author is actually saying is, 'the browser is more important, so I don't care about the OS anymore,'" Travers concluded.

'My Desktop Has More Power'

"The desktop just isn't trendy anymore," consultant and Slashdot blogger Gerhard Mack agreed. "First the laptop was supposed to kill it and now it's the cell phone and tablet? Bad idea."

Mack himself has a desktop, a laptop, an HTC Desire Z cellphone and a work-provided Galaxy Tab, he told Linux Girl.

"Care to guess which one I use the most?" he asked. "It's the desktop. My desktop has more power than the rest of my devices put together, the keyboard is at the proper typing height, and the monitors are on an ergonomic stand to keep my neck from being strained."

Desktops dominate Mack's workplace as well, he said.

"We would be totally screwed if the desktop went away," Mack concluded, "and I doubt many other offices are different from ours."

'Dying, Not Dead'

Hyperlogos blogger Martin Espinoza didn't dispute the desktop's ultimate demise -- just how soon it would happen.

"The desktop is dying, it's not dead," Espinoza told Linux Girl. "These things don't happen overnight."

People are finally "getting their hands on quad-core phones with HDMI output, so now they have a feasible desktop replacement for the majority of purposes that they carry around in their pocket," he explained. "Since these devices are now in the hands of the public, they may begin to meaningfully supplant the desktop as the primary computer for getting things done."

'Overpriced Toys for Boys'

It's actually laptops that continue to dominate the market, according to Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site.

"How can they not when the local big-box is selling name-brand 15.6-inch quad-core laptops with 6 gigs of ram and a 750 gig hard drive for (US)$400?" she pointed out. "With this sort of value proposition, laptops are killing desktops and netbooks, as well as giving tablet manufacturers headaches by making tablets look like overpriced toys for boys in comparison."

As for the operating system becoming a commodity, that's just what it's supposed to be, Hudson asserted. "The real question is how long before all applications integrate with the Net seamlessly? Games have been doing it for decades."

'It Won't Happen'

We've seen more than a decade of over-hyped technology, "from Java applets to Html5 and 'native code in the browser,' that was supposed to position the browser as the inevitable successor to the OS for running applications," Hudson continued. "It hasn't happened, and it won't happen."

It's Apple (Nasdaq: AAPL) that has shown the way "out of the browser mess," she added. "The success of Apple's App Store shows that people want programs that are easy to find, buy, install and update. With Windows 8 becoming the last OS to get its own app store or software repository, the window of opportunity (pardon the pun) for the browser to be the unifying platform is being permanently slammed shut."

Operating systems, then, "will continue to battle it out long after all the functionality of the web browser is merged into the OS," Hudson concluded.

'It Isn't Anything Killing Anything Else'

Slashdot blogger hairyfeet didn't see any deaths imminent on the horizon.

Rather, two things are at work, he told Linux Girl. "No. 1 is that tablets are a new toy. The second is something we PC builders and repairmen have known for a loooong time, and it is that once you hit dual core, PCs became 'good enough' for what the masses want to do with them.

"My two boys are playing PC games on 'hand me downs' that are Pentium Ds with Radeon HD4850 cards -- that is, what, 5 year old tech?" hairyfeet explained. "Yet all the MMOs and even the oldest shooters play JUST FINE at native resolution."

So, "it isn't anything killing anything else, it is two separate things that have squat to do with one another," he concluded. "People simply don't need another PC right now and many are simply waiting until XP is EOL before bothering. If you have a dual core with plenty of RAM, why would you buy another machine?"

Like Cars, Pick-Ups and Trucks

Lawyer and Mobile Raptor blogger Roberto Lim put it especially nicely by comparing computing devices with vehicles.

"Our mobile phones and tablets are like our cars: limited in functionality but the most convenient way for us to get around," Lim suggested.

"Laptops are like pick-up trucks -- more capable than our cars and can do some serious work but less convenient to go around in and park in tiny parking spaces," he added.

Desktops and workstations, finally, "are our trucks, which can do some really serious hauling.

"There will always be a place for our pick-ups and our trucks, but most of us really only need cars," he concluded.

'What Is Dying Is the Lock-In'

Blogger Robert Pogson did see a death in progress, but not of any particular hardware or software: "What is dying is the lock-in on the desktop by Wintel," he opined.

Microsoft (Nasdaq: MSFT), for instance, "can no longer dictate what software runs on PCs in general as they did in the past," he explained. "Their tax on IT is still growing as the market grows, but each quarter their 'client' division does a little worse at collecting the tax, and it's not about illegal copying."

Intel (Nasdaq: INTC), meanwhile, "no longer dictates that consumers shall have only x86 processors," he asserted. "People are choosing ARM (Nasdaq: ARMHY) and GNU/Linux and Android/Linux and loving it. Ever since the netbook showed GNU/Linux did work on the desktop, the end was near."

'People Are Choosing a Linux OS'

This, then, "is the last year that M$ is in the running for desktop market share," Pogson predicted. "They are losing almost 1 percent share every month."

The reason for that, he added, "is Android/Linux. Ordinary people are choosing a Linux OS when there is fair competition and choice on retail shelves. Just wait until Ice Cream Sandwich kicks in at the Christmas season..."
