Sunday, September 4, 2011

Mobile Tech » Defending the Mobile Universe From a Fraudster Onslaught


As smartphones and other mobile devices move to center stage, consumers and retailers are being threatened by a fresh crop of fraudsters bent on wreaking havoc in the mobile universe. Although walking a tightrope between consumer convenience and fraud prevention is tricky, there are several ways consumers and retailers can protect themselves, while at the same time maintaining real-time response and analysis in mobile commerce.

The mobile age has arrived.

In 2011, global shipments of smartphones and tablet devices surpassed shipments of laptops and desktop PCs, laying the groundwork for an era in which consumers are increasingly using mobile technology for everything from airline reservations to vehicle purchases.

The mobile age snuck up on many of us. But one group of mobile-savvy users has been hard at work, waiting for mobile to rise in the hierarchy of commerce: fraudsters.

The frightening reality is that most retailers and consumers aren't prepared for the tidal wave of fraud that is already being unleashed in the mobile arena. To inoculate themselves, retailers and consumers need to understand which mobile behaviors put them most at risk.

More importantly, they need to know how to navigate those behaviors to create an effective mobile fraud prevention strategy.

Pre-Mobile Fraud Prevention

In the "old days," fraud prevention took on many different forms. One of the simplest forms of fraud prevention in e-commerce was to verify the customer's credit card number, CVV code, billing location and shipping address.

From there, online fraud prevention progressed to include device identification via IP addresses and tagging technologies such as Flash objects and cookies. Although these tactics were initially effective, fraudsters adapted their strategies to incorporate the use of proxy servers to hide their true IP addresses, among other techniques.

Today's most sophisticated fraud-prevention providers employ cookieless device identification, real-time proxy piercing for true IP address detection, intelligent packet inspection for subversion detection and other strategies to stay one step ahead of cybercriminals.
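To make "cookieless device identification" a little more concrete, here is a minimal, hypothetical Python sketch. It simply hashes a handful of request attributes into a stable device ID instead of relying on a stored cookie; the attribute names are illustrative assumptions, and real providers combine far more signals with probabilistic matching rather than a single exact hash.

```python
import hashlib

def device_fingerprint(request_attrs: dict) -> str:
    """Derive a cookieless device ID by hashing request attributes.

    Simplified illustration only: real providers combine many more
    signals (TCP/IP stack quirks, fonts, time zone, plugins) and use
    probabilistic matching rather than one exact hash.
    """
    # Sort the keys so the same attributes always yield the same ID.
    canonical = "|".join(f"{key}={request_attrs.get(key, '')}"
                         for key in sorted(request_attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two visits with identical attributes map to the same ID, even after
# the user clears cookies.
attrs = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 4_3 like Mac OS X)",
    "accept_language": "en-US",
    "screen": "320x480",
    "time_zone": "UTC-5",
}
print(device_fingerprint(attrs))
```

Clearing cookies would not change the ID above, which is the whole point of going cookieless.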

Yet the transition to mobile is creating new challenges in fraud prevention, many of which demand a more proactive approach on the part of retailers and consumers alike.

Mobile Behaviors and Vulnerabilities

Malware is a major threat for mobile consumers and retailers. More than 73,000 new malware threats are released every day, driven by consumers' willingness to download unproven apps that haven't been properly vetted by platform providers. While open platforms like Android have proven more susceptible to attacks, Apple's (Nasdaq: AAPL) iOS is by no means vulnerability-free.

Most mobile users don't install antivirus or antispyware software on their devices. That makes mobile technology an easy target for criminals eager to obtain personal data and online account credentials by introducing malicious code through apps, social media sites and other entry points.

Mobile fraud detection is complicated in that it is difficult for online retailers to pin down the source of fraudulent transactions. While IP addresses for PCs are roughly fixed to a single location, mobile users routinely connect through WiFi networks and 3G gateways scattered across a broad area. This makes it problematic for retailers to effectively monitor threats using IP addresses.

Another worrisome feature is that mobile devices have limited digital fingerprints. Today's consumers remove cookies and gravitate toward mobile devices that eliminate the use of Flash technology (e.g. iPhones) -- crippling the ability of many online retailers and financial institutions to differentiate between returning customers and fraudsters.

One of the most challenging aspects of mobile fraud prevention may lie in the expectations of consumers themselves. More than ever, consumers expect mobile technology to deliver immediate gratification. To meet the needs of their customers, retailers provide opportunities for the instant authorization of goods.

Consequently, transaction speed eliminates the possibility of manual reviews, allowing mobile crooks to exploit automated purchasing capabilities and quickly offload stolen merchandise around the world.

Mobile Fraud Prevention Strategies

2011 was the first year that mobile transactions became a material percentage of revenue for companies. The potential for serious mobile fraud is challenging retailers to identify solutions that improve security without significantly impacting the mobile customer experience.

Although walking a tightrope between consumer convenience and fraud prevention is tricky, there are several ways consumers and retailers can protect themselves, while at the same time maintaining real-time response and analysis in mobile commerce.

1. Current transaction mix review

For retailers, mobile fraud prevention begins with a thorough review of the current makeup of fraud transactions and prevention tactics to determine the impact of mobile on existing device identity and behavior-based fraud filters. In many instances, retailers may discover that their current approach allows free access for mobile fraudsters while limiting access for legitimate customers. For example, a common attack vector is to change browser settings to make a PC look like a mobile device in order to target lax mobile-specific rules.
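As a rough illustration of that attack, here is a hypothetical Python heuristic that flags a "mobile" user agent contradicted by desktop-style signals. The specific signals and threshold are assumptions made for the sketch, not any vendor's actual rule set.

```python
def looks_like_spoofed_mobile(user_agent: str, screen_width: int,
                              has_touch: bool) -> bool:
    """Flag sessions whose 'mobile' user agent contradicts other signals.

    Hypothetical heuristic: a desktop browser reconfigured to send a
    mobile user agent often still reports a desktop-sized screen and no
    touch support. A production filter would weigh many more signals.
    """
    claims_mobile = any(token in user_agent
                        for token in ("iPhone", "Android", "Mobile"))
    desktop_signals = screen_width >= 1280 or not has_touch
    return claims_mobile and desktop_signals

# A PC pretending to be an iPhone in order to hit lax mobile-only rules:
print(looks_like_spoofed_mobile(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 4_3 like Mac OS X)", 1920, False))  # True
```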

2. Reliance on mobile Web for application authentication and authorizations

A mobile application can provide a superior user experience in terms of responsiveness and interactivity. When it comes to moving money or authenticating a high-risk transaction, however, companies should fall back to using tested and proven Web technologies. Using HTML5, companies can have a multifunctioning Web and mobile site that is trivial to integrate into a mobile app. Trying to re-invent the wheel and putting too much trust in the mobile device is a recipe for trouble.

3. Centralization of fraud intelligence

While many companies have different teams and technologies supporting their mobile versus Web strategy, it is important that fraud intelligence is consolidated across the same risk engine. In addition to improving fraud detection rates, centralization allows retailers to better manage the cost of fraud prevention.

4. Behavior and location profiling

With mobile location quickly becoming a reliable user signature, security-based apps can leverage mobile GPS technology to create profiles based on daily patterns. The catch is that consumers must sacrifice a certain amount of privacy -- plus valid use of location data is context-specific. For example, you may be willing to let your bank use your GPS location on a one-off basis, perhaps to verify a fund transfer from your account. You would not, on the other hand, find it acceptable for them to continually track all of your movements.
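As a sketch of how such a profile might be checked, the following Python snippet compares a transaction's GPS coordinates against a user's opted-in daily locations using the haversine great-circle distance. The 100 km threshold and the profile format are assumptions made for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def location_is_unusual(txn_location, daily_locations, threshold_km=100):
    """True if a transaction is far from every point in the user's
    opted-in daily-pattern profile. The threshold is illustrative."""
    return all(haversine_km(*txn_location, *loc) > threshold_km
               for loc in daily_locations)

profile = [(40.71, -74.01), (40.75, -73.99)]          # e.g. home and office
print(location_is_unusual((51.51, -0.13), profile))   # True: London vs. New York
```

In the article's terms, a bank might run a check like this on a one-off basis for a high-risk transfer, with the customer's consent, rather than continuously tracking movements.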

5. Layered fraud prevention

No security measure is foolproof. Eventually, cybercriminals will find a way to breach any authentication method, no matter how sophisticated. Layered fraud prevention offers greater security because it presents multiple security barriers, increasing the level of difficulty for fraudsters. For example, many iPhone app developers rely on the iPhone UDID, a unique hardware identifier, to recognize returning customers. However, apps on jailbroken iPhones can easily spoof this identifier.
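A minimal sketch of the layered idea, in Python: several weak signals are combined into a single risk score, and the transaction is approved, queued for review, or declined based on that score. The signal names, weights and thresholds are illustrative assumptions, not a real risk engine's values.

```python
def layered_risk_score(signals: dict) -> float:
    """Combine several independent fraud signals into one score.

    The weights are illustrative assumptions; the point is that no
    single check (device ID, proxy detection, location, velocity) is
    decisive alone, but together they raise the bar for an attacker.
    """
    weights = {
        "unrecognized_device": 0.30,
        "proxy_detected": 0.25,
        "unusual_location": 0.25,
        "high_order_velocity": 0.20,
    }
    return sum(weight for name, weight in weights.items() if signals.get(name))

signals = {"unrecognized_device": True, "proxy_detected": False,
           "unusual_location": True, "high_order_velocity": False}
score = layered_risk_score(signals)
# Route the transaction based on the combined score (thresholds are made up).
action = "decline" if score >= 0.6 else "review" if score >= 0.3 else "approve"
print(score, action)
```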

For consumers, mobile fraud prevention boils down to a handful of common sense behaviors and practices. Since many app stores lack advanced malware detection systems, consumers should be cautious about downloading apps from unknown providers. Likewise, links contained within text messages should be treated with a healthy dose of skepticism.

Most importantly, consumers should direct their mobile purchases toward trusted retailers. Going forward, the mobile marketplace will reward retailers that take meaningful measures to improve mobile security and equip their customers with convenient fraud-prevention tools.

Mobile Tech » The Wedding Crashers


The U.S. Department of Justice has made its move to prevent AT&T from merging with T-Mobile, filing an antitrust suit to stop the purchase. It's the biggest roadblock yet in what was never expected to be an easy deal, and now it's unclear whether the companies have any hope of winning the fight. Meanwhile, HP has one last go with TouchPad, iTunes changes the channel, and WikiLeaks dribbles more than it meant to.

Nobody expected AT&T (NYSE: T) to have an especially easy time convincing regulators to allow it to buy up rival wireless carrier T-Mobile. AT&T announced its intentions last spring to purchase the fourth-largest U.S. carrier from parent company Deutsche Telekom (NYSE: DT) for US$39 billion, and critics from all corners wasted no time expressing why they thought that would be a very bad idea.

But that's not to say everyone thought it would be impossible. If the prevailing winds of antitrust regulation weren't strong enough to knock Comcast's (Nasdaq: CMCSK) bid for Universal off course, then who's to say AT&T's deal wouldn't eventually fly too?

Now, though, it looks like the proposal has encountered its biggest blow yet, and it may end up crushing the merger completely. The U.S. Department of Justice has filed a civil antitrust suit to block the buyout, claiming such a deal would significantly hurt competition in the U.S. wireless market. If allowed to go through, the purchase would end up hurting consumers through higher prices, diminished service quality, fewer choices and slowed innovation, according to the DoJ.

Just as the suit was announced, the U.S. Federal Communications Commission chimed in with a message of support for the Justice Department's action.

Over the last few months, AT&T has taken every opportunity it could get to convince regulators, watchdogs, consumers, you, me and every other living thing on the planet that the merger was a great idea. Just as the suit was announced, AT&T was busy publicizing a new reason everyone should get behind the deal: jobs. Letting the company buy up T-Mobile would enable it to bring 5,000 outsourced jobs back to the U.S., the company claimed. It's still not clear how many existing U.S. jobs the merger would have eliminated, though.

Listen to the podcast (14:27 minutes).

The news of the DoJ's lawsuit comes as a stroke of vindication for groups opposed to the buyout, of which there are many. Sprint (NYSE: S) would probably have been the carrier with the most to lose from such a deal, and it's been loudly critical almost since Day 1. Consumer groups have also expressed concern. Verizon hasn't exactly been at the forefront of the opposition, even though a combined AT&T/T-Mobile would have knocked it off its perch as the biggest carrier in the U.S. by several million customers.

Meanwhile, it's probably the worst-case scenario for AT&T, which seemed more or less suckerpunched by the DoJ's lawsuit. Wayne Watts, a top AT&T lawyer, said that his company sensed no indication that the department was even considering doing something like that despite multiple meetings with DoJ representatives. He said AT&T plans to fight the suit in court.

Investors were smacked hard too. The morning the news came out, AT&T shares dipped as much as 5.5 percent. And losing the deal could cost AT&T a lot more than a bit of dignity and a few months' worth of lawyer fees. Back when it was negotiating the deal, the carrier reportedly agreed to pay Deutsche Telekom $3 billion for its trouble if the purchase were somehow scuttled. With that kind of money on the line, AT&T will no doubt fight tooth and nail in court, but it also might mean it ends up agreeing to some unusually large concessions if the DoJ offers to cut it a deal.

A compromise may not be what the DoJ has in mind, though. Given the unexpected arrival of this lawsuit, it looks like the department wants this deal dead, plain and simple.

If it turns out AT&T simply cannot get what it wants, though, it's not all doom and gloom. The carrier insisted in the past that buying T-Mobile was the only reasonable way it could expand its 4G services to cover most of the U.S., but that's open to debate. A leaked memo from a few weeks ago almost made it look like the real reason it wanted to buy T-Mobile was to keep it out of Sprint's hands.

As for Deutsche Telekom, if the deal dies, it'll likely never find another buyer willing to take T-Mobile off its hands for anywhere near $39 billion. It might go crawling back to Sprint, though even a merger between those two smaller networks might still agitate antitrust regulators. Without some kind of outside assistance, though, it's unlikely T-Mobile will be able to compete in the 4G arena. It could perhaps start buying even smaller carriers to expand its reach, but that definitely does not play into Deutsche's plans to exit the U.S. market ASAP.

Leak, Spill and Dribble

WikiLeaks is no stranger to information security disasters, but usually it's the one doing the tattling. This time, though, the leaker has become the leakee. Sort of.

The information relates to CableGate, that mountain of diplomatic cables WikiLeaks distributed last year. What's different this time is that those same cables can now be viewed in unredacted form -- the names and identities of confidential sources are left visible. As leak-happy as WikiLeaks is, it never intended to let the world see the uncensored versions of those files.

In a way, this unredacted version of CableGate has been freely floating around the Internet for some time, but all locked up with encryption. You could get the files easily enough, but in order to unscramble and make any sense of them, you'd need to have the encryption key -- in other words, a long, complex password.

That key is what's been exposed, and according to WikiLeaks, it was all the fault of a nosy little newspaper. Several months ago, a writer associated with the UK's The Guardian was putting together a book on WikiLeaks, and in the process he was given a password from WikiLeaks officials to decrypt those unredacted files. That password was printed verbatim in the book, and now it seems anyone who can grab that original encrypted data from a file-sharing network can apply the key, get access, and find out exactly who blabbed what.

But The Guardian has its own version of events. It claims that the reporter used that key to access the encrypted files over a secure server, access to which was tightly limited. He was made to believe that the data would be removed from the server in a matter of hours, at which point the encryption key would be worthless. He thought WikiLeaks would just shut down access and change the code after he was finished having a look, so no big deal if it was published in the book.

What that reporter didn't know, according to The Guardian, was that the same file, protected with the same key, was floating around all over the Internet, being traded around by various WikiLeaks aficionados who just wanted to keep the data alive, even if it was inaccessible to them. So when that key was published, it still applied to a very much alive and very dangerous file, one that could get some people in deep, dark trouble.

Oddly enough, the tension on this has been brewing for weeks. Only now has WikiLeaks publicly commented on it because it believed the story was gaining so much traction that there was no longer any point in ignoring it. Now it's getting ready to sue The Guardian as well as an individual in Germany.

Aside from the harm this may cause for people identified in the data dump, it's also put WikiLeaks' credibility at risk. The site depends on its reputation for being a place where whistleblowers can go to anonymously leak information that they think should be out there. Whether they're right or wrong, they could have serious reservations about contacting the site if a simple misunderstanding like this is all it might take to out them.

This Is Your Company on Drugs

Google (Nasdaq: GOOG) may not be much of a drug dealer itself, but it certainly will take ad dollars from drug dealers. Mostly that means legal online pharmacies. Look hard enough and you may even find a few ads from oddball vendors dealing in strange potions and herbs that are strong enough to get you plenty blasted but aren't quite on the other side of legal -- not yet, anyway.

But for a while there, Google was carrying on some ad activities that the U.S. Department of Justice said were very much against the law. According to the DoJ, Google's AdWords program was running ads from Canadian pharmacies that targeted U.S. buyers, which is a serious foul. The investigation lasted for years, but Google settled the problem in August by paying an enormous amount -- half a billion dollars, which was the estimated revenue from all that shady advertising.

That settlement wasn't the end of the headaches for Google, though. It's paid off the DoJ, but now a shareholder wants a pound of flesh too. The Google stock owner reportedly claims the company breached its fiduciary duty by facilitating illegal drug imports and filed false annual reports over a period of six years by not disclosing the revenues earned in doing so.

If the suit takes root, it could soon be joined by lots of other shareholders as a class action.

But at first glance, the case doesn't appear to be an easy win. For one thing, Google is doing fine: The revelation of the whole problem and the size of the settlement were barely a blip on the radar as far as the company's stock value was concerned. Shareholders who complain they lost money because of any sort of malfeasance on Google's part might have a hard time connecting any substantial losses to this one particular affair.

Also, the argument that Google falsified records looks like it could be pretty flimsy. I guess it's possible there's some smoking gun that hasn't been revealed yet, but fudging a public company's official balance sheet is a serious infraction. Maybe some companies do it once in a while, even ones as prominent as Google. But would the CFO of the Internet's most powerful company really put it all on the line just to hide a few piddly little drugstore ads, the profits on which hardly amount to a drop in Google's massive ocean of money? Hard to believe it happened -- it just sounds too stupid.

Changing the Channel

Apple's (Nasdaq: AAPL) iTunes service made a small change to its lineup recently that could play into much wider plans for entertainment.

It eliminated 99-cent TV show rentals. This was where you could pay a buck, download a TV show, watch it, and then forget about it, because it would delete itself automatically from your hard drive in a few days. For anyone who isn't into watching reruns on purpose, it was a pretty good deal. It came out right around the time the revamped Apple TV arrived on the scene, and TV show rentals were one of the device's big selling points.

But no more of that. Movies can still be rented, but TV shows must be purchased for $2 to $3 apiece. They stay on your hard drive until you delete them. A dollar or two difference may not be that big a deal, depending on your entertainment budget, but this was a simple, customer-friendly offering that Apple promoted pretty hard during its short existence. So why yank it after only about a year?

Apple said it was all about customer demand. According to them, TV show rentals never really caught on. And who knows, maybe the TV networks applied a little pressure themselves. Apple's entertainment delivery model is going through some big changes soon, and it's not so much of a stretch to think maybe Apple had to do a little give and take to get what it wanted.

The change that's coming is iCloud, the Apple syncing service that'll let you share data across Apple devices in new ways. And users of Apple TV have noticed recently they can start actually streaming TV shows they've purchased in the past -- even ones they bought so long ago they can't remember buying them. It used to be you could just stream rentals and stuff saved somewhere on the local network.

So it looks like iTunes TV is coming to a point at which everything you ever buy will be accessible anywhere through your iDevices -- it's just that when it comes to TV, you'll have to buy it, not rent it. You'll be paying more, perhaps, but at least you'll have a nice, big library of reruns.

One More Time

HP's (NYSE: HPQ) TouchPad tablet burned out quickly, but that's not because it burned all that brightly. It was a lousy seller from Day 1. Mega-retailer Best Buy (NYSE: BBY) reportedly complained about mountains of unsold units. And it's entirely possible that the TouchPad would have withered away over many slow and painful months had it been allowed to live.

We'll never know for sure, though, because HP put it to sleep just six weeks out of the gate.

But then HP pulled another odd move: It says it's going to go in for one more round of TouchPad production. It's still a dead tablet -- HP really is killing its webOS device lineup. But the company's going to press out another batch of them, even after that discontinuation has been publicly announced -- a sort of anti-victory lap, I guess.

HP says this last hurrah is due to the incredible spike in popularity the TouchPad experienced after its death warrant was publicized. Suddenly TouchPads were hot, and it wasn't because people wanted them as museum pieces or cheese plates. It was because prices took a nosedive to $99 -- and this was for a tablet you used to have to pay $500 to get. Sure, it's been dead-ended by its manufacturer, and webOS might fade into oblivion soon too, but that doesn't make the tablet worthless, and $100 was apparently an attractive price for a lot of buyers.

If HP was desperately trying to rid itself of a dead product, then that price drop was definitely a very effective distribution laxative. But if clearing its house of TouchPads really was what HP was trying to do, why make even more of them? HP says it's going to sell this next batch of TouchPads for an MSRP of $99, and it's a big stretch to believe that could be anywhere close to profitable.

Factors like scarcity and uncertainty may have combined with price to drive the TouchPad's postmortem popularity, and it looks like HP's trying to keep that alive for this last production run too. It says it doesn't know when they'll come in or how many there will be.

Perhaps the company has a bunch of unassembled TouchPad components lying around. That wouldn't be surprising, given how suddenly and unexpectedly it axed the TouchPad in the first place. Maybe putting them together and selling more TouchPads for an almost embarrassingly low price is actually the least-costly option.

IT Management » Federal CIOs: With More Authority Comes More Accountability


"When you look at the memo, as well as the blog from Steven VanRoekel, they both point to a positive and welcome attitude toward the CIO function," said Forrester analyst Chip Gliedman. "On the other hand, what they describe is really the natural function of a CIO, whether in the public or private sector. So it's a little worrisome that they felt the need to state this in such a formal way when it's mainly describing the obvious."

A major initiative to improve federal information technology management, including IT procurement, got a boost recently, even though the person behind the reform had left federal service. Just days after taking over as federal chief information officer, Steven VanRoekel posted a blog supporting a new role for federal agency CIOs.

The bolstering of CIOs in the IT management process was a key element in a 25-point federal IT reform program generated by former federal CIO Vivek Kundra in late 2010. VanRoekel succeeded Kundra earlier this month.

VanRoekel based his blog on a recent White House directive that grants CIOs much more authority in managing IT -- but also holds them more accountable.

"As the federal government implements the reform agenda, it is changing the role of agency CIOs away from just policymaking and infrastructure maintenance, to encompass true portfolio management for all IT," said Office of Management and Budget (OMB) Director Jacob Lew in an Aug. 8 memo.

"This will enable CIOs to focus on delivering IT solutions that support the mission and business effectiveness of their agencies and overcome bureaucratic impediments to deliver enterprise-wide solutions," he said.

4 Pillars

The OMB directive addresses four aspects related to enhancing the role of CIOs:

Governance. CIOs must drive the investment review process and have responsibility over the entire IT portfolio for an agency. CIOs must work with finance and acquisition personnel to ensure IT portfolio analysis is an integral part of the budget process.

Commodity IT. Commodity IT services are often duplicative and ineffective. CIOs must leverage purchasing power across their organizations to improve efficiencies on 1) infrastructure (data centers, networks, desktops, mobile devices); 2) enterprise systems (email, collaboration tools, identity and access management, security and websites); and 3) business systems (finance, human resources, administration). CIOs need to consolidate and share resources instead of setting up separate independent services.

Program Management. Agency CIOs must improve the overall management of large federal IT projects by identifying, recruiting and hiring top IT management talent. CIOs will be held accountable for the performance of managers.

Information Security. CIOs, or senior officials reporting to the CIO, shall have the authority and primary responsibility to provide security for information collected and maintained by the agency, and for the information systems that support operations. Establishing agency-wide programs with continuous monitoring will be an essential information security tool.

VanRoekel's support for the OMB policy was immediate. "In my time in both the private and public sectors, I know the importance of giving CIOs the tools necessary to drive change and to hold them accountable for results," he said.

Directive Reflects Reality

While the OMB directive may signal a significant change in how the federal government regards CIOs, it should not be considered all that radical.

"OMB's Aug. 8 memo reiterates common sense initiatives that are natural extensions of the increasing focus on IT management and performance, and the need for enhanced program management capabilities," Stan Soloway, president of the Professional Services Council (PSC), told CRM Buyer.

"When you look at the memo, as well as the blog from Steven VanRoekel, they both point to a positive and welcome attitude toward the CIO function," Chip Gliedman, vice president and principal analyst at Forrester, told CRM Buyer.

"On the other hand, what they describe is really the natural function of a CIO, whether in the public or private sector. So it's a little worrisome that they felt the need to state this in such a formal way when it's mainly describing the obvious," he said.

"CIOs always have to juggle budget requirements, maintenance requirements and strategic goals -- and they have to be accountable," Gliedman added. "And if they miss the mark as much as some federal projects have missed, then they shouldn't be in those jobs."

The mention of security in the OMB directive, however, was an element that needed attention, according to PSC's Soloway. "The directive also includes a welcome focus on information security by clarifying that agency CIOs remain important participants in security policy and execution," he said.

Although the directive did not reiterate another key element from the 25-point plan, Soloway added that contact between the public and private sectors is crucial to the role of CIOs.

"As personnel and funding resources become more scarce, it is essential that government and industry expand their communications efforts to ensure that IT programs are effectively and efficiently planned for and executed," he said.

The Federal Buzz: Notes on Government IT

CIO Transition: Newly appointed Federal Chief Information Officer Steven VanRoekel entered his job with the support of a key lawmaker. "I'm happy to hear there will be a smooth transition for our nation's second Chief Information Officer," said Tom Carper, D-Del., noting the "continued importance of managing our nation's over $80 billion federal information technology budget."

VanRoekel "comes with an impressive resume, but he has big shoes to fill," he added.

VanRoekel had been executive director of Citizen and Organizational Engagement at the U.S. Agency for International Development. Before moving to USAID in 2011, he served as managing director of the Federal Communications Commission. Between 1994 and 2009, he was employed at Microsoft (Nasdaq: MSFT), most recently as senior director for the Windows Server and Tools Division.

Federal Facebook Challenge: While residents of the East Coast recover from Hurricane Irene, the U.S. Department of Health and Human Services (HHS), in cooperation with the Federal Emergency Management Agency (FEMA), is seeking help in developing a disaster response resource that features the use of Facebook.

HHS has issued a challenge for professional software developers -- or nonprofessional individuals or entrepreneurs -- to design a Facebook application that makes it easy for people to create their own emergency support network, and provides users with useful tools in preparing for and responding to emergencies.

The top prize for the challenge is $10,000. The deadline for submission is Nov. 4, 2011.

Internet » What's NOT in a Domain Name


The early domain name launch was a disaster. There were no trademark rules followed. It was nothing but anarchy, with domains handed out on a first-come, first-served basis while intellectual properties were looted in broad daylight. Soon there were 1 million domain registrations per day and a thriving worldwide cybersquatting industry. The confusion created a boom in reckless advertising and led to huge branding and trademark fights.

Esther Dyson, the Great Dame of Silicon Valley, at times matriarch to Bill Gates and many other lads on the innovation circuit, wrote a harsh column Aug. 26 about ICANN's gTLD system, titled "What's in a Domain Name?"

I like and respect Esther, especially for her technical background -- we have shared the podium. However, as this topic deals with the centrality of global corporate nomenclature, it demands an authoritative analysis, and I feel it's my responsibility to clarify a few points.

Dyson begins by asserting that "a name is just a sound or sequence of letters. It carries no value or meaning other than as a pointer to something in people's minds."

On the contrary, a name carries all the value. Without a name identity, a brand is no different than unlabeled goods stacked in warehouses, or global commercial services gasping and dying without being identified.

Imagine eBay, Gucci, Rolex or Google without a name; they would become penny stocks. It's the power of the name -- the perception of value that it creates. Without a name identity, big or small, you have nothing. Only the big branding mentality is logo-slogan centric and looks to a name just as a pointer.

Fruit-Inspired Folly

Dyson goes on to say that it's a trait of modern economies that people are able to distinguish between generic terms -- she gives fruit as an example -- and trademarks, which "refer to specific goods or services around which someone has built value."

Trademark law explained a century ago that generic words cannot be trademarked. Naive entrepreneurs often use them nevertheless, at great risk. Apple was dragged into courts over the conflict with the Beatles' Apple Corps, which eventually resulted in the largest settlement of the period.

Despite all the problems, Apple survived and acquired a worldwide "secondary meaning." Its common use is now quickly associated with the computer company and not the fruit. Orange, Banana, Apricot and many other fruity-named enterprises keep drowning and are often kept alive only via rebranding life support. Despite all the so-called glory attached to a few successes, it is wrong to adopt a moniker that's a commonly used generic name, even though it may be trademarked under a specific class of wares.

In a case when a generic main name is attached to another word to describe its service, the secondary word attached to the name is hardly ever used, and the naked usage of the first word only becomes awkward -- and eventually a big trademark liability. In modern economies, Watermelon Systems and Strawberry Securities are doomed from the start, and only heavy advertising can keep them alive.

Dyson recalls her days as founding chairman of ICANN, when "we more or less followed the rules of trademarks, with an overlay of 'first come, first served.' If you could show that you owned a trademark, you could get the '.com' domain for that name, unless someone else with a similar claim had gotten there first."

Reflecting back on the early domain name launch, I recall that it was a disaster. There were no trademark rules followed. It was nothing but anarchy, with domains handed out on a first-come, first-served basis while intellectual properties were looted in broad daylight.

Soon there were 1 million domain registrations per day and a thriving worldwide cybersquatting industry. The confusion created a boom in reckless advertising and led to huge branding and trademark fights.

The Necessary Work Ahead

"The value is in people's heads -- in the meanings of the words and the brand associations," insists Dyson, "not in the expanded namespace."

Expanding the namespace does increase value. The value in people's heads comes from a name's accessibility and usability, which in turn add to increased visibility. This visibility increases the value of the brand, as visible brands are chosen over obscure brands. gTLD creates new layers of usability.

The visibility that domain names brought to IBM and Xerox is as significant as the effect TV broadcasting had on name identity.

Apple will have added usage with unlimited dot-apple applications. It can control and allocate at will creative digital sub-name branding services. Back to the fruity example, which now seems to be attached to products, thanks to its generic disposition. "Apple phone," "Apple iPod" and "Apple computer" are all common phrases because "Apple" is not distinct enough. It isn't necessary to point out that a Rolex is a watch. Pointers are signs of weak names.

Global corporate nomenclature became complex with little room for overly creative, easy-access domain naming, while over-friendly generic use made the branding sector the beneficiary, to the tune of US$500 billion yearly. Weak names need constant oxygen to survive. It's not the complexity; it's the simplicity of the issue that's missing.

Dyson maintains that the new gTLD naming will "create jobs, but little extra value. To me, useless jobs are, well, useless."

It's another myth that gTLD will result in extra work with no value. Has the Internet been good to our society? Have 200 million domain names served our commerce well? With a billion-plus new users coming online, what's useless is to think that extra work is unnecessary.

"One Internet, one world thinking" demands global naming systems that will manage billions of domain names of all sorts and types on a multilingual format for the global population. This is a formidable task that requires continuous expansion on all fronts. It's not a question of useless work -- it's all about facing the future boldly and dealing with the exponential reality of 5 billion Internet users by 2020.

Dyson invites us to imagine owning a valuable patch of land that someone wants to divide into smaller parcels and then charge a fee to protect each one of them.

In fact, the "agrarian" cybersquatter activity originated and thrived under the cheap, no-questions-asked domain name system. It provided more than 10,000 UDRP cases, with lawyers sorting out the disputes at many thousands of dollars each.

From the standpoint of business costs, well-structured and protectable names win their cases with ease. That's not the case with loosely composed generic names that flew out of a dictionary.

"Coca-Cola is that farmer," Dyson continues. "It and other trademark holders are now implicitly being asked to register Coca-Cola in each new TLD -- as well as to buy its own new TLDs."

Famous brands routinely defend against trespassing and win. The bigger question is in the gTLD process, as complicated as getting a city to host the Olympics. How likely is it that brand names -- say, dot-Deloitte or dot-Sony -- would trespass and secure, say, "Coke.Deloitte" or "Coke.Sony" and then offer those domains to the general public, as is done with the current system? To do what -- raise enough money to cover one hour of their electricity bill?

This fear mongering is based purely on the old domain name mentality and is proof of a lack of understanding of the new system among marketers of the world. The old system was designed for easy and free access that enabled widespread theft of identities. This gTLD system is far too expensive and complicated to encourage petty theft. It will not end the trademark-protection problem, but it will certainly put a sober new face on it.

"The only shortage," maintains Dyson, "is a shortage of space in people's heads."

The only shortage in people's heads comes from poor recall-ability of generic and diluted names. Which United? Which National? Which First? Weak and borderline brands are threatened for many reasons. Primarily, Western brands are facing the brands of emerging economies that are being bolstered through digital compression and portability; they clearly have the cyberbranding edge.

The gTLD is not subdividing any cultivated name space; it will only provide tools to create more customer touchpoints. What exactly did the early domain name bring to our society? Fast-forward change, where unfit name identities broke down and collapsed along the way.

A Serious Game

Dyson refers to a Twitter conversation she had with Annalisa Roger, founder of DotGreen.org, who told her "about the value her group will be adding to .green: marketing, brand identity, raising money for NGOs. But I couldn't help wondering why she can't just add the same value to DotGreen.org."

The $500,000 average cost of a gTLD for any reasonable project is highly affordable in comparison to the production cost of a single TV commercial, or a few full-page ads in major cities, or a logo-slogan rebranding exercise -- without the launch cost, of course.

A gTLD is not expensive; it is highly justifiable against its components of power and sub-name branding architecture. It's a sophisticated game that demands special maneuvers.

Dot-branded generic names are an entrepreneurial heaven of high risks and high returns. Getting a gTLD for the purpose of cybersquatting someone else's brand is ridiculous. Only cheap, dime-a-dozen domain names make this option look attractive and lucrative.

The bigger challenges are corporate nomenclature-based. Is "Green" really better than "Eco"? Why? Which would sell more? "Tel," "Cell" or "Mobile"? "Car," "Auto" or "Moto"? What's the difference between "Ucar," "Mycar," "iDrive" or "Udrive" from a usability and marketing suitability point of view, and which combination will create more dilution in the long run?

This clearly points to the fear among global advertising agencies and branding services that the centrality of the gTLD application gets extremely intricate at the core of nomenclature. To admit this would be to admit that names do matter. This is where the "names as a pointer" school of thought collapses. One Internet, one world demands one name, one owner thinking.

"Suppose, for example," argues Dyson, "that a cheese maker buys .cheese (as was suggested by one person at a new-TLD meeting recently) and uses it to favor only its own brands?"

Contrary to general perception, generic gTLDs cannot be trademarked. They are licenses to drive a master name identity on cyberhighways and create unlimited subnames to join the race.

The issuance of dot-cheese to Kraft or Bata simply gives them a communication and customer contact point advantage. By no means does it stop others from producing cheese or branding other types of cheeses or creating name brands.

This is an open race for serious marketers -- no different than Walmart buying TV spots during the Super Bowl to maintain its dominance. It's an open market, and all are free to play -- if they know the rules.

"The real innovation has been in companies such as Facebook, LinkedIn, Twitter, and Foursquare, which are creating their own new namespaces rather than hijacking the DNS," Dyson insists.

The social chatter is not name-branding but rather noise creation. Facebook, LinkedIn and their kind did not add any new nomenclature platform; they simply offered free registration.

Building a name identity with the intention of commercializing an expandable base can only be achieved when the name management organization is fully committed to deliver such tangibles.

The gTLD is a formidable exercise in this pursuit, and it would be very naive to assume that filling out a customer service card at McDonald's or e-voting creates a namespace. The gTLD, by virtue of its seriousness and size, will be open to a lot of innovative applications along the way. There will be spectacular successes and catastrophic failures on most major new fronts.

"Most of the people active in setting ICANN's policies are involved somehow in the domain-name business, and they would be in control of the new TLDs as well," asserts Dyson.

The Internet without domain names is basically useless. If ICANN is the mother operator of the Internet, obviously apart from security and hard wiring, global naming systems are its prime responsibilities. Success in the creation of domain name devices is the only key to open the hidden universe behind the website.

"Of course, if I am right," Dyson suggests, "the DNS will lose its value over time, and most people will get to Web sites and content via social networks and apps, or via Google (or whatever supersedes it in the competitive marketplace)."

Names have provided eternal longevity to ideas and brands. Social media are a novelty that will run their course in time. Bad names kill good brands. No matter how it's chatted, typed, whispered, called, yelled or found on search, the fact remains: Without a name, there is no brand.

The more unique and powerful the name, the more it climbs to the top; the more diluted, the more it sinks. Social media are just another place where good names can swim. Advertising provides flotation to sinking names. How well a name identity secures global mindshare will forecast the continuation of its success.

Internet » Goodbye TV Show Rentals - and Goodbye External Hard Drives


I don't particularly like the fact that Apple has killed TV show rentals. But I think I can at least understand why it did it. Apple says it was all about popularity, and TV networks may have played a role behind the scenes, but really the elimination of rentals is just part of a broader plan that will simplify media consumption and free us from having to constantly act as our own data micromanagers.

My unhappiness with Apple's (Nasdaq: AAPL) decision to cancel 99-US-cent iTunes TV show rentals led me to get a glimpse of Apple's new iCloud-related world. As was widely reported, Apple decided to ditch the rental program because consumers overwhelmingly prefer to buy their TV shows.

I don't have the data in front of me, the hard numbers that Apple has showing how many TV shows customers were renting vs. how many shows they were buying. But I can't say I believe Apple on this one.

If Apple showed a pie chart with a tiny slice for rentals, OK, I would take them at their word. But I think two other factors are at play that are much more telling.

First, I don't think the major television networks were ever all that pleased with the Apple iTunes TV show rental program. Basically, I could rent an HD TV show for one-third of the $2.99 it cost to buy the HD version outright. So I could watch a great season finale twice, for example, and still come out ahead as a consumer with a big-screen TV in my living room.

And I did.

But Back to the Other Reason Apple Canceled TV Rentals

Regardless of whether or not the economics of Apple and the TV networks were a factor, I think Apple's iCloud direction is a much bigger factor.

When iCloud launches, consumers will be able to download and essentially stream their purchased content from iCloud to all of their Apple and iOS devices, like iPhones and iPads. And if you buy something from one device, it will automatically download to another device over WiFi. For e-books, for example, this would be pretty handy because I might buy an e-book from my iPhone ... and then want to read it later on my iPad. Having it automatically be ready for me is a fantastic consumer-friendly feature.

The same goes for music. Buying something on my iPhone and needing to sync with my MacBook and then get my iPad to my MacBook to sync yet again is all a big pain in the butt. It's not always that much of a pain, of course; for example, if I buy an app on my iPhone all I have to do is find the app on the App Store through my iPad and then I can install it, usually for free if it's an app that supports both the iPhone and iPad.

iCloud, though, seeks to erase these distinctions and make content consumption easy for everyday consumers. If you buy, whatever you bought will be just there for you, easy for you to consume in the Apple universe.

A Glimpse of iCloud Power Today

When I first heard about iCloud, I can't say that I was overly excited, partly because its actual launch date was so far out. It was also partially because I wasn't sure some of the features were particularly compelling. But now that Apple killed iTunes TV show rentals, I noticed a cool new feature that became available in the most recent Apple TV update: streaming TV shows.

Basically, any TV show that you bought through iTunes is now available for instant streaming directly from your Apple TV. Your iTunes account knows your purchase history, and consequently it lets you access the TV show again and again. It's pretty easy to use, and it's awesome. On my Apple TV, I now have access to TV shows that I forgot I ever purchased, even ones that I purchased years ago and subsequently deleted to save hard drive space.

Which brings up another key reason I'm starting to get amped up about iCloud: hard drive storage space. The MacBook Air, for example, just isn't a computer I can buy yet. The reason: The lack of available SSD-based hard drive space -- or rather, the steep cost for enough to keep me happy. To get just 256 GB of SSD space in the 13-inch MacBook Air, it'll cost me $1,600. If I upgraded to a third-party SSD option, it would still cost quite a bit.

The answer: Offload all my space-hungry movies and TV shows and photos to a back-up desktop hard drive, and if I'm lucky, maybe to a wicked-fast Thunderbolt RAID drive. While prices will inevitably come down on Thunderbolt options, using a desktop storage solution to hold my everyday content is not a tidy solution at all.

There's also the option of network-attached storage, but that's not particularly tidy either.

No, I very much prefer to have a large hard drive capable of holding all my movies, TV shows, home video, photos and music. Why? Instant and easy access, no matter where I'm located. If I travel for business or pleasure, I hate to leave something behind.

Irrational? You bet. But I'm a consumer, and I don't think anyone has ever accused consumers of being rational.

When it comes to the Apple universe, what's this mean? Take the new Apple TV, which doesn't have an on-board hard drive like the first-generation Apple TV. If you don't have an iTunes-running computer on your WiFi network, your second-generation Apple TV can't stream content from your Mac or PC. In a world where most computers being purchased are laptops, that model sucks! If I take my MacBook to a local coffee shop, no one in my household can access the movies that are on my MacBook. It's worse if a guy leaves for a week-long business trip.

So does a household have to have a Mac mini or iMac then, simply to stay at home all the time and be the central repository for content? Kind of. But that's not efficient either, nor is it particularly tidy. The last thing I have time for is managing a bunch of content, moving it around, making sure my Mac and PCs are authorized, etc.

By offering major types of content through an iTunes-account centered streaming model, and by extension an iCloud model, guys like me no longer have to pack around superfluous gigabytes of data. Just a few hours ago, I was deleting old TV shows to make space on a MacBook hard drive in order to back it up on a slightly smaller external hard drive before upgrading it to Lion. Odds are, most of those TV shows I'll never watch again, but I'll tell you, it was somewhat comforting knowing that I didn't have to track down another hard drive to save them to ... or trash them forever. Apple was keeping my purchase history available to me via my Apple TV, and in the future, via iCloud and my iOS devices.

Maybe a MacBook Air Is in My Future

So, what have I learned? First, by allowing people to rent TV shows via iTunes and their Apple TV, Apple ends up with a very confusing delivery model for consumers -- too many choices. I could rent a show and have access to it for 48 hours after I start watching it, but then it's gone forever ... or I can buy it once and have access to it forever, even if I delete it. Now how do you make that clear in a simple "buy now" interface? Not too hard. But how might that work with iCloud? With multiple devices? It just doesn't. The rental model never did work well across multiple devices because the DRM issues had to be satisfied -- a time-wasting pain.

So for the sake of clarity and easy portability, Apple is reducing a pricing option to make the overall experience better. I get that. I won't always like it, but I understand the basic premise.

Second, I'm learning that maybe I don't have to be so married to a large hard drive in the near future. If all my content purchases could be stored on Apple's servers in the sky, I don't have to worry about keeping that massive 3.35 GB HD version of the Pixar (Nasdaq: PIXR) movie "Up" on my MacBook hard drive. Right now, of course, Apple is only providing this sort of option for TV shows, not for purchased movies. I'm not sure why, but I'm guessing it has something to do with digital rights management and movie studios. Still, I'm hoping that iTunes-purchased movie iCloud storage will be a surprise announcement this fall.

All in all, this means that not only would I get super-easy access to all my cool media content on all my iOS and Mac devices, but I might also be able to enjoy a reasonably priced (and fast) SSD drive in a MacBook Air or even MacBook Pro.

Of course, none of this helps me with my 75 GB and growing iPhoto library -- but that's a personal problem, you might say.

Computing » Microsoft Ties a Ribbon on Windows 8 Explorer


Microsoft will apparently bring the Ribbon graphical user interface, first seen in Microsoft Office applications, to Explorer, the OS' file management utility. The decision's drawing some mixed reactions. Microsoft said it went with the Ribbon after intense study of user behavior, though some users think the Ribbon is either confusing or takes up too much screen space.

Speculation about what features Windows 8 will include is sizzling as Microsoft (Nasdaq: MSFT) continues to remain tight-lipped about details of the new operating system.

However, Redmond has talked to some extent about the upcoming OS' handling of Explorer, the Windows file management system.

Posts on the Windows 8 blog indicate Explorer will have the ribbon GUI Microsoft Office users know -- and, in some cases, hate. It will also let users mount VHD and ISO files as virtual drives, possibly doing away with the need for optical storage media.

Company spokesperson Emma Mahoney directed TechNewsWorld to the blog in response to a request for comment.

Boldly Going Where Everyone's Gone Before

Microsoft says it has three main goals for Windows 8 Explorer: Optimize it for file management, create a streamlined command experience, and restore the most relevant and requested features from Windows XP that will fit.

Research through telemetry -- where Windows users agreed to let Microsoft harvest data about their usage patterns of the operating system without tying the data to them personally -- formed the basis of the decisions Microsoft made about Windows 8 Explorer's features.

Telemetry showed that more than 70 percent of usage is for core file management, and that the top 10 commands constitute almost 82 percent of usage, for example.

It also showed that almost 55 percent of commands are invoked with a right-click and another 32 percent with keyboard shortcuts, while only about 11 percent are invoked with the Command bar. Further, only two of the top 10 commands invoked in Explorer are available in the Command bar.

After evaluating several approaches, Microsoft decided to use the Office-style ribbon as the user interface for commands. Among other benefits, this offers familiarity to users of Office, Microsoft Paint and Windows Live Essentials, so there's little to learn.

The ribbon will have Home, Share and View tabs; a File menu; and various contextual tabs.

Existing add-ons will work in the right-click menus in Windows 8 but they won't be able to plug into the ribbon UI, Microsoft said.

You can see a demo of Windows 8 here.

Ribbons Aren't for Everyone

At least one analyst is none too excited about having the ribbon as a UI in Windows 8.

"In Explorer, where you're looking at files or documents, you'll give up space to see your commands," groused Michael Cherry, senior analyst at Directions on Microsoft. "Ribbons take up a lot of real estate on the screen, and seeing all the commands isn't as important to me as seeing all the files and documents."

Another possible problem with the ribbon is that it may not be suited to touchscreens, which will increasingly penetrate the PC and laptop market.

DisplaySearch predicts that touchscreen module revenues will hit US$14 billion by 2016, undergoing strong growth in all-in-one PCs, mini-notebook and slate PCs, education and training, and information and self-check-in kiosks.

For example, touch functionality in mini-notebooks and slate PCs will grow from 1 million units in 2010 to 50 million in 2016, DisplaySearch forecasts.

"I'm not convinced ribbons will work with touch and gestures," Cherry told TechNewsWorld.

No More Optical Drives?

Apple (Nasdaq: AAPL) has done away with the optical drive with the MacBook Air, and it looks as if Microsoft may be anticipating the death of the disc drive with Windows 8. Redmond's adding native Explorer support for ISO and VHD files in Windows 8.

This means users won't necessarily require a physical CD-ROM or DVD drive. However, that doesn't necessarily mean Windows 8 PCs will not have optical drives.

"Microsoft doesn't build the PCs, so it's up to the OEMs to decide whether or not they want to build them without optical drives," Directions on Microsoft's Cherry pointed out.

"I don't think optical drives are as necessary as people think they are," Cherry added.

However, if Microsoft does away with the need for optical drives, then users' music and image files have to be transferred over a network, and this means Explorer must enable rapid file transfer, Cherry said.

"The biggest thing about Explorer that frustrates me right now is that, somewhere in the Vista timeframe, file copies became very, very slow," Cherry stated. "Rather than making things pretty [with a ribbon] I'd prefer they made them extremely fast."

One other complaint Cherry has about Explorer concerns its estimates of how long copying a file will take.

"Their estimates on how long a copy will take are worthless," Cherry remarked. "I'd prefer they give me an accurate estimate of how long it'll take to copy files so I know how long it will be before I can start working with data."

Computing » TrueCrypt Locks Down Data In a Rock-Solid Vault

Posted by echa 8:33 PM, under | No comments

Computing » TrueCrypt Locks Down Data In a Rock-Solid Vault For data sensitive enough to warrant encryption, a tool like TrueCrypt is a great solution. The app creates an encrypted file container of any size on your hard drive or on an external drive. Once the container is mounted using a super-strong password of your own choosing, files can come and go as you please. Once it's dismounted, they're locked behind a virtually impenetrable wall of encryption.

Linux users are blessed with a collection of file encryption tools. But chances are, whatever application you use for that task lacks the efficiency, speed and functionality of TrueCrypt.

TrueCrypt does what any file encryption application is supposed to: It locks down access to your data so no one without the password or keyfile can grab it. But the process TrueCrypt employs and its toolkit of features separate this file encryption product from other contenders.

It stores your data on an encrypted volume that lets you work seamlessly. This on-the-fly access automatically encrypts data before it is saved and decrypts it when a file is loaded. Unlike with other encryption programs, you do not have to click or drag files to trigger the process.

Another essential difference with TrueCrypt is the level of security it brings to your data. This application controls the entire file system. It encrypts everything from file and folder names to the contents of every file. This encryption locker even includes the volume's free space and metadata.
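Conceptually, "on the fly" means every write passes through the cipher before it reaches storage and every read passes back through it. The toy Python container below sketches that pattern using the third-party cryptography package; it is not TrueCrypt's volume-level XTS implementation, and the class and file names are invented for the example.

    # A conceptual sketch of encrypt-on-save / decrypt-on-load.
    # TrueCrypt works at the block-device level; this toy container only
    # shows the "data is never stored in the clear" idea.
    from cryptography.fernet import Fernet   # pip install cryptography

    class ToyContainer:
        def __init__(self, key: bytes):
            self._cipher = Fernet(key)
            self._store = {}                 # name -> ciphertext

        def save(self, name: str, data: bytes) -> None:
            self._store[name] = self._cipher.encrypt(data)    # encrypted before it lands

        def load(self, name: str) -> bytes:
            return self._cipher.decrypt(self._store[name])    # decrypted only on access

    key = Fernet.generate_key()              # TrueCrypt derives its keys from your password
    box = ToyContainer(key)
    box.save("notes.txt", b"sensitive text")
    print(box.load("notes.txt"))

In TrueCrypt the same principle is applied to the whole volume, which is why even file names and free space are covered.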

Getting It

Despite how easy TrueCrypt is to work with, installing it can be a little bothersome. In my case, the standard compressed download file was useless on my Ubuntu 10.10 installation. I had the same problem with another desktop running Ubuntu 11.04.

I unarchived the tarball file I downloaded from the developer's site. Then I clicked on the resulting executable file as per the directions provided.

That seemed to install TrueCrypt. The installation process planted an entry in the Accessories menu. But TrueCrypt failed to run when I selected it.

First-time users are supposed to be greeted by a setup wizard; until TrueCrypt creates its storage volume, the rest of the program is inaccessible. In my case, nothing happened at all.

Fixing It

The fix for this SNAFU was simple enough to do. But it should have been offered as a download option from the developer's site to avoid my having to hunt down a solution.

I found a .deb file for TrueCrypt here. The .deb format is a native file package format that works on Debian-based Linux distros. Ubuntu is a Debian derivative.

Once I downloaded the .deb installation file and clicked on it, the Ubuntu package manager cleanly installed it. Clicking on the menu entry then ran the set-up wizard as it should.

Make sure that you have the current version, however. The latest stable version is 7.0a released on Sept. 6, 2010.

Next Steps

TrueCrypt offers some options for setting up the storage media. You can create an encrypted volume in a file, in a partition or on an entire drive. You also have the option to create a standard or a hidden encrypted volume to store your data.

Think of the TrueCrypt storage container as a file. You can place it anywhere, such as the hard drive, an external hard drive or large-capacity USB drive or other external media. Just as you can do with a regular file, you can move it, copy it or even delete it.

Use the location selector window in the GUI (Graphical User Interface) to complete your choices. Once you complete these steps, the file selector window disappears.

Almost Gotcha

Be careful when you select a file name and location. I almost walked into a minefield of errors when navigating through the set-up wizard.

TrueCrypt does not encrypt any existing files when it first creates its file container. If you goof and select an existing file during the setup's naming process, you will overwrite it. That, of course, doesn't encrypt the data -- instead, it wipes that data out.

Not to worry, though. Just be sure you use a unique name for the TrueCrypt volume. Later in the process you can encrypt existing files by moving them to the newly-created TrueCrypt volume.

Big Tip: After copying existing unencrypted files to the TrueCrypt volume, securely wipe the original unencrypted files. To do this, you will need another tool. But again, Linux offers ample options.
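On Linux, tools such as shred or wipe handle that job. For a rough picture of what they do, here is a hedged Python sketch that overwrites a file with random data before deleting it; journaling filesystems and SSD wear-leveling can keep stale copies this approach never touches, so treat it as an illustration rather than a guarantee. The file name is a placeholder.

    # Overwrite a file with random data before deleting it -- a sketch of what
    # secure-delete tools do. Journaling filesystems and SSDs may keep stale
    # copies this never touches, so treat it as illustration, not a guarantee.
    import os

    def overwrite_and_delete(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())         # push the overwrite out to disk
        os.remove(path)

    overwrite_and_delete("plaintext-original.txt")   # placeholder file name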

Secret Sauce

TrueCrypt is loaded with features. Perhaps the most important choices in using this encryption application are selecting the degree and type of encryption. The setup wizard asks you to choose an encryption algorithm and a hash algorithm for the volume.

I chose the default settings. My work is not rocket science, so I am comfortable with default security levels.

The next decision involves setting the size of the TrueCrypt container. Ample help is available in the on-screen prompts to guide you.

An equally critical decision is setting your password. Again, read the documentation provided to prep yourself about what makes a strong password. Then apply those tips to creating a rock-solid password to protect your encrypted data.
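A useful mental model for password strength is entropy -- the number of bits an attacker would have to guess. The quick Python estimate below is a back-of-the-envelope illustration, not TrueCrypt's own checker, and it shows why a long passphrase beats a short jumble of symbols.

    # Back-of-the-envelope password entropy: length * log2(character-set size).
    import math, string

    def entropy_bits(password: str) -> float:
        charset = 0
        if any(c in string.ascii_lowercase for c in password): charset += 26
        if any(c in string.ascii_uppercase for c in password): charset += 26
        if any(c in string.digits for c in password):          charset += 10
        if any(c in string.punctuation for c in password):     charset += 32
        return len(password) * math.log2(charset) if charset else 0.0

    print(entropy_bits("Tr0ub4dor&3"))                   # roughly 72 bits
    print(entropy_bits("correct horse battery staple"))  # longer passphrase, far more bits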

Engage Encryption

My initial reaction to the final setup step was mild amusement. I'm glad I did not ignore the importance of establishing the cryptographic strength of the encryption keys; making light of this step would have greatly weakened the effectiveness of TrueCrypt's security.

This process involves moving the mouse randomly about within the Volume Creation Wizard window for at least 30 seconds. Move it around even longer to generate stronger encryption keys.
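What the wizard is doing under the hood is folding unpredictable input into its random pool before the volume keys are generated. The Python fragment below sketches only that general idea -- hashing event coordinates and high-resolution timestamps together -- and is not TrueCrypt's actual pool-mixing code; the simulated samples stand in for real pointer events.

    # The idea behind "wiggle the mouse": fold unpredictable event data
    # (positions plus high-resolution timestamps) into a hash-based pool.
    # TrueCrypt's real mixing is more elaborate; the samples here are simulated.
    import hashlib, random, time

    pool = hashlib.sha512()

    def feed(x: int, y: int) -> None:
        pool.update(f"{x},{y},{time.perf_counter_ns()}".encode())

    for _ in range(1000):                    # a real GUI would feed actual pointer events
        feed(random.randrange(1920), random.randrange(1080))

    seed_material = pool.digest()            # later stretched into the volume keys
    print(seed_material.hex()[:32], "...")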

Clicking the Format button begins the volume creation process. When it is finished, you will see a new file or volume created in the location you specified earlier in the set-up routine. Do not forget to click the Mount button so you can use the encrypted volume.

Bottom Line

TrueCrypt is an ideal way to place your critical data files in a virtually impenetrable sealed crypt. The encryption process only requires an entry password. Once entered, file operations work in and out of the virtual storage disk much like with a physical disk.

There is little room for error: TrueCrypt handles the encryption and file-saving chores. Even if I forget to dismount the encrypted file or volume, updated content held in temporary memory is encrypted and moved to its iron-clad storage vault when the operating system shuts down.

Computing » A Tale of Two Licenses

Posted by echa 2:02 PM, under | No comments

Computing » A Tale of Two Licenses "Now look what's happened," said blogger Robert Pogson. "They've closed a lot of the source on Android 3.x and no one knows where they stand. Can Samsung trust Google now that Google's bought Motorola? How open is the Open Handset Alliance now that some of the code is closed? This is a huge mistake, and FSF is right to push for GPLv3 or better."

Well, the wild ride that was August appears to have tapered off a bit as the month drew to a close, so Linux bloggers have finally had a few days to stop and catch their breath.

Bartenders throughout the blogosphere have had a chance to restock their supplies, and conversations have, for the most part, returned to normal volumes.

The one exception in that last respect, however, has been a debate that's actually been gaining momentum over the past few weeks since the Free Software Foundation's Brett Smith published a little piece entitled, "Android GPLv2 termination worries: one more reason to upgrade to GPLv3."

'Make the Switch to GPLv3'

"When we wrote GPLv2 in 1991, we didn't imagine that a free software project might have hundreds of copyright holders, making it so difficult to get a violator's rights restored," Smith wrote.

"We want it to be easy for a former violator to know that they're still allowed to change and share the software; if they stop distribution because of legal uncertainty, fewer people will have free software in the long run," he explained. "Hence, we created new termination provisions for GPLv3. These terms offer violators a simple method to earn back the rights they had."

Because of that and for other reasons, "we urge developers who are releasing projects under GPLv2 to upgrade to GPLv3," Smith concluded. "Companies that sell products that use Android can help out by encouraging the developers of Linux to make the switch to GPLv3."

'Is It Fair to Use FUD?'

Smith's piece sat quietly for several days before Linux bloggers noticed it. Once they did, however, there was no stopping them.

"FSF uses unproven compliance issue to promote GPLv3," was the charge over at ITworld, for example. "Is it fair to use FUD to promote GPLv3 over GPLv2?"

"New GPL licence touted as saviour of Linux, Android," was the wry observation at The Register.

Linux Girl's Debate-o-Meter started screaming soon afterward, so she headed straight for the blogosphere's Punchy Penguin Saloon to learn more.

'Inviting Abuse'

"GPLv3 Linux for Android? That's so bad it's not even wrong!" began Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site.

"Mobile phone manufacturers don't make different silicon for each market -- instead, they customize the device's software so that the phone can be type-approved by each country individually," Hudson explained.

A GPLv3 Android phone, however, "with all the decryption keys available to any user on demand, is just inviting abuse," Hudson opined. "No manufacturer would make such an insecure-by-design device, and no telco would put the stability of their network at such risk."

'You Automatically Receive a New License'

In addition, "despite the article's claim of 'permanent termination,' it's very easy to get a new license to redistribute a GPLv2 program -- just download or otherwise get a new copy, as per section 6 of the GPLv2, and you automatically receive a new license grant, which is valid as long as you are in compliance," Hudson pointed out.

"While this doesn't 'whitewash' any problems that arose under the old license grant, it's clear that the new license cannot have additional restrictions, such as a past license termination, imposed on it," she added.

Hudson actually went so far as to contact Smith, the article's author, to point out "these and other issues," she told Linux Girl. "He insists that once a license is terminated, that's it.

"However, the law is clear: in all 'take-it-or-leave-it' contracts such as the GPL, the contract must be interpreted in the recipient's favor (contra proferentem)," Hudson pointed out. "Companies in compliance have the legal right to rely on the grant provided by section 6 of the GPL."

'People Acting Like the GNUstapo'

Ultimately, "this is not about Android and Linux licensing, but about pushing an agenda," Hudson concluded. "The usual suspects have made Android and Linux licensing a hot issue. The last thing we need is people acting like the GNUstapo and adding fuel to the fire."

Meanwhile, "it's a safe bet Google (Nasdaq: GOOG) is working on a BSD-hosted version of Android as a fallback," she added. "If I were them, that's what I'd be doing."

Roberto Lim, a lawyer and blogger on Mobile Raptor, saw a different problem with GPLv3.

The Anti-Tivoization Effect

"I do not see anything wrong with the new termination clauses in GPLv3," Lim told Linux Girl, "but there is one issue in GPL version 3 which I think should be considered seriously."

Namely, whereas companies that make devices running GPL software "can use digital rights management technologies to make sure that the device will only function with its official software" -- known as "Tivoization" -- "GPLv3 wants to prevent Tivoization and force Mr. Manufacturer to allow end users to modify the software installed on the appliance or device for their own purposes without restriction," Lim pointed out.

"This is effected by requiring that the source code be accompanied by any activation keys or methods which would allow the end user to run modified software on his device," he added.

'GPLv3 Will Place Them at Risk'

"Maybe a few years ago that would have been a good idea -- that was back when we would pay full price on our hardware," Lim suggested. "But we are moving into an area where manufacturers are now also service providers. Like Amazon's (Nasdaq: AMZN) coming tablet, they might subsidize the cost of new hardware in exchange for the profits they expect to make from after-sales profits from software and services."

GPLv3, then, "will place them at risk of having their subsidized hardware used for purposes other than intended and may make Linux adoption harder," he asserted.

"Is the point of GPL to allow others to build on your achievement and make entirely new things, or is it to allow users to tinker with the devices themselves? I think it is the former," Lim concluded, "and GPLv3 benefits a very small, but very noisy, segment."

'Simply Too Much Work'

Consultant and Slashdot blogger Gerhard Mack saw yet another problem.

"The FSF forgets that for many cases, changing the existing license is impossible," Mack said.

"To switch the license, each contributor with code still in the project must agree to a change, and some projects are now so large that it's difficult to find everyone," he pointed out. "Replacing the code of everyone who can't be found is simply too much work. In the case of some projects, some of the original authors are even dead."

Chris Travers, a Slashdot blogger who works on the LedgerSMB project, didn't see anything new in the FSF's pro-GPLv3 effort.

"They have been doing this since the GPLv3 came out," he explained. "Anyone who bought the FSF's line has probably already switched. Those of us who don't like things in the GPLv3 will stick with the GPLv2 and ignore them."

'They Better Hope Oracle Wins'

For Slashdot blogger hairyfeet, however, the FSF's effort is too little and too late, he told Linux Girl.

"The FSF and even Torvalds don't own FOSS anymore -- Google does," hairyfeet explained.

"If Google were to fork the kernel tomorrow, how many developers do you think would follow it?" he asked. "Sadly, as much as FOSS users like to go, 'boo hiss' at Oracle (Nasdaq: ORCL), they better hope Oracle wins against Google.

"If they lose? Well, it's not a mystery why Google doesn't allow GPLv3 anywhere near Android," he added. "As one of the guys at Google said, 'Android is open FOR OEMS'; the unspoken part of that is, 'and not for you, silly user!'"

'FSF Is Right to Push for GPLv3'

Indeed, "one of the few differences I have with Google and Android/Linux is the crummy license Google chose," blogger Robert Pogson told Linux Girl. "They could have dashed off a typical platform of GNU/Linux with the GPL but chose other software just to avoid the GPL.

"Now look what's happened: They've closed a lot of the source on Android 3.x and no one knows where they stand," Pogson explained. "Can Samsung trust Google now that Google's bought Motorola? How open is the Open Handset Alliance now that some of the code is closed? This is a huge mistake, and FSF is right to push for GPLv3 or better."

In fact, "we should be making software and hardware, not fighting over the code," Pogson opined. "By forking Linux and closing the code, Google is making the old guard look good. At least they were consistent."

Pogson "can forgive Google for not thinking this through from the beginning," he added. "Who knew how big Android/Linux would get?

"The world needs good IT, but this is not the way to do it," Pogson concluded. "Being consistent and sticking with FLOSS would have prevented so many problems no one needs."

Computing » WikiLeaks Stews in Its Own Juice

Posted by echa 1:52 PM, under | No comments

Computing » WikiLeaks Stews in Its Own Juice The tables were turned on WikiLeaks when a massive amount of highly sensitive and confidential diplomatic cables it was sitting on became exposed online. "WikiLeaks is the perfect example of thieves stealing from thieves," said Prem Iyer, head of the information security practice for Iron Bow Technologies. "All the info that they stole from others, they decided to store online -- and the password was leaked."

Another global security mess is in the making, on the heels of the publication of thousands of sensitive security documents obtained by WikiLeaks. However, in this particular instance, WikiLeaks insists it didn't mean to do it.

Last week, WikiLeaks reportedly made some 134,000 diplomatic cables available. Unlike earlier disclosures, though, these cables were published with the names and identities of confidential and sensitive sources fully intact.

WikiLeaks blamed UK publication The Guardian for the dump, explaining that the encrypted file containing the cables had been online but secure -- that is, until a journalist disclosed the decryption password in a book published by the paper.

Knowledge of the leak has been spreading online for months, according to WikiLeaks, but only recently has it reached critical mass.

"For the past month, WikiLeaks has been in the unenviable position of not being able to comment on what has happened, since to do so would be to draw attention to the decryption passwords in The Guardian book," reads a WikiLeaks editorial.

With the connection publicly made, WikiLeaks says it can speak about the matter now. The site has begun pre-litigation action against The Guardian and an individual in Germany it accuses of distributing the passwords for personal gain.

The Guardian has rejected WikiLeaks' claims that it is responsible, and it has called on the site not to release the remaining cables.

The E-Commerce Times received no replies from WikiLeaks, The Guardian or the U.S. State Department to its requests for comments.

Lessons Learned

This episode would be downright amusing if the stakes weren't so high. Potentially, lives could be at risk. WikiLeaks carved out a place for itself in the global political arena by leaking sensitive information that supposedly was secure, and now it has been tripped up in similar fashion.

"WikiLeaks is the perfect example of thieves stealing from thieves," Prem Iyer, head of the information security practice for Iron Bow Technologies, told TechNewsWorld. "All the info that they stole from others, they decided to store online -- and the password was leaked."

Despite the unique circumstances of the leak, the players and the ramifications, there are several themes common to more mundane leaks. Observing and learning from them could help a company avoid its own corporate disaster.

"WikiLeaks learned that securing sensitive data online can be more difficult than it realized, between ever-growing sophistication of hackers and human errors," Iyer said.

Dangers of the Cloud

Any company or government agency that is looking to store data online must realize that cloud solutions are at risk of attack.

"You cannot assume that the proper security controls are in place," warned Iyer.

"Organizations who are considering cloud solutions must understand the security mechanisms that the cloud provider has in place," he advised, "and then determine if public cloud is still an option or if a private cloud solution would be a more secure alternative."

Overprivileged and Accident-Prone

Another oft-cited reason for inadvertent disclosures is the generous granting of administrative privileges to people who don't need them, Brian Anderson, chief marketing officer of BeyondTrust, told TechNewsWorld.

"You might have a secretary who has admin privileges and she accidentally copies a sensitive file and emails it to an entire client list. That has happened," he said.

The point is that companies need to protect their systems not only from people intent on stealing information -- for greed or other reasons -- but also from people who are sloppy with their security practices, explained Anderson.

"Set systems so they grant the least privilege access -- only what a particular individual needs and nothing more," he advised.

The Peril of Writable Media

The motherlode of WikiLeaks' sensitive cables came to it via U.S. Army intelligence analyst Bradley Manning, who allegedly downloaded the purloined material and handed it over to the site.

The proliferation of mobile devices and writable media such as USB drives and read/write CD/DVD drives has boosted productivity across organizations, but it has also heightened the threat posed by malicious insiders, John Sennott, director of marketing for Prism Microsystems, told TechNewsWorld.

Companies need to recognize the potential threat these devices can have for their security and adopt a concept of "trust but verify," he said. "It is important to let the users know they are being monitored, so they are afraid of getting caught if a policy is not followed."
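A bare-bones sketch of that "trust but verify" approach might simply watch a removable-media mount point and log every new file that appears, so copies to USB leave an audit trail. The Python example below uses standard-library polling and a placeholder mount path; it is an illustration, not Prism Microsystems' product, and a production tool would hook operating-system events instead.

    # Poll a removable-media mount point and log every new file that appears --
    # a bare-bones "trust but verify" audit trail. The mount path is a
    # placeholder; a production tool would hook OS events rather than poll.
    import logging, os, time

    WATCH_PATH = "/media/usb0"
    logging.basicConfig(filename="media-audit.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    seen = set()
    while True:
        for root, _dirs, files in os.walk(WATCH_PATH):
            for name in files:
                path = os.path.join(root, name)
                if path not in seen:
                    seen.add(path)
                    logging.info("new file on removable media: %s", path)
        time.sleep(5)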

Computing » Mozilla Targets Tablets With New Browser Designs

Posted by echa 1:39 PM, under | No comments

Computing » Mozilla Targets Tablets With New Browser Designs Tablets have caught the interest of browser maker Mozilla, which is polishing up a new version of its Firefox browser for Android tablets. Previews show a tablet browser with many elements that will be familiar to users of the company's desktop version. However, users generally don't yet seem to be as choosy about the browsers on their mobile devices as they are about the browsers on their desktops.

The Mozilla Foundation is enhancing the tablet version of its Firefox browser.

It's leveraging Android Honeycomb but retaining familiar visual elements of Firefox such as the signature big back button and distinctive tab shape, according to a blog post by Ian Barlow, who works on Mozilla's mobile user experience team.

Some of the UI elements that were tucked away on Firefox for smartphones to maximize screen space have been brought back in the version for tablets.

However, it's not quite clear which version or versions of Android Mozilla is working on -- an earlier post on the Mozilla Mobile blog that talks about the enhanced tablet experience says the browser is now better integrated into Android Gingerbread, the version of Android that predates Honeycomb, the version meant for tablets.

The Mozilla Foundation did not respond to requests for comment by press time.

Firefox for the Tablet

Firefox for Honeycomb has an Awesomebar that uses the same tabbed menu as the desktop version to offer rapid access to bookmarks, history and users' synced desktop activity, Barlow stated.

Some tabs that were hidden on the smartphone version of Firefox have been brought back onto the screen and given some extra juice.

For example, in landscape mode, users see tabs in a persistent bar on the left-hand side of the screen that they can access with their left thumb while scrolling through Web content with their right hands.

In portrait mode, this tab bar becomes a menu item at the top of the screen.

Last week's blog post on Firefox mobile said Mozilla has optimized fonts, interface elements and buttons, among other things, in the tablet version of the browser. It also promised further enhancements in the future.

Also, it said the mobile browser offers crisper text, faster rendering and less pixelation when zooming.

However, the post added that Firefox is better integrated into Android Gingerbread, which is the predecessor of Honeycomb, the first version of Android optimized for the tablet.

Yet another post in the Mozilla mobile blog said Firefox for Android adds support for developer tools.

It has a single-touch events application programming interface (API). This lets devs build Web experiences that detect touch events and gestures. Support for multitouch events will be added in the future.

Another API is the IndexedDB API. This gives devs local database storage in Firefox so they can make Web apps and websites available offline.

Looking Back Darkly?

Why Gingerbread? Or is Mozilla supporting both Gingerbread and Honeycomb on tablets? Or could someone have made an error somewhere?

"I think that's very curious," Chris Hazelton, a research director at the 451 Group, told LinuxInsider.

"It could be that Gingerbread was easier to work with, they had greater access to it, or there were less patent liabilities around Gingerbread," Hazelton added.

Several companies, including Microsoft (Nasdaq: MSFT) and Oracle (Nasdaq: ORCL), have filed separate patent suits targeting Google (Nasdaq: GOOG) and makers of Android devices.

Where's Firefox Mobile Going?

Mozilla had previously offered a mobile version of Firefox, named "Fennec," for Windows Mobile.

However, the advent of Windows Phone 7 apparently killed off efforts in that direction, according to Fennec team member Alex Pakotin.

Mozilla unveiled Firefox 4 for Android and Maemo in March after putting it out in beta in October.

And, according to Mozilla's product vision statement, the organization will focus strongly on mobile devices, including capabilities such as multitouch, notification, 2D and 3D graphics, audio and video.

However, whether Mozilla will succeed in the mobile market is open to question.

"It's hard to claim that there's a vibrant third party market for mobile browsers," Carl Howe, director of anywhere consumer research at the Yankee Group, told LinuxInsider.

"Most smartphones come with browsers, and they are pretty good; even the BlackBerry now uses a Webkit-based browser, and it's pretty hard to dislodge the incumbents," Howe added.

"I don't see how Mozilla could be successful when it has a number of problems in mobile," the 451 Group's Hazelton stated.

"Mozilla haven't been able to get early approval of their mobile browser for iOS, and they haven't got a huge penetration in the Android market because users are not yet that picky about the browser," Hazelton explained.

Computing » Big Mango Falls From HTC's Tree

Posted by echa 1:36 PM, under | No comments

Computing » Big Mango Falls From HTC's Tree HTC is showing off its first smartphones featuring the Windows Phone Mango update: the Titan and the Radar. Both will be released in select markets next month. The Titan sports a 4.7-inch screen, which is significantly larger than most other smartphones' screens. This follows Samsung's recent release of the 5.3-inch Galaxy Note. Are we witnessing the rise of the Superphones?

HTC has shown off its first two smartphones running Mango, Microsoft's (Nasdaq: MSFT) upcoming update for its Windows Phone mobile operating system.

The new devices, the Titan and the Radar, are being shown to some consumers in London, Paris, Madrid and Berlin.

The Titan has a 4.7-inch display -- much larger than a typical smartphone's display and more than 30 percent larger than the iPhone 4's 3.5-inch screen. The Radar's screen is smaller at 3.8 inches.

At a time when so many smartphones look so much alike, a particularly large screen could make the Titan conspicuous to consumers, if nothing else.



"Every form factor has been tried, and now it looks like manufacturers are going to larger and larger smartphones," Allen Nogee, a research director at In-Stat, told TechNewsWorld. "When almost all smartphones look exactly the same and run the same applications, how do you set your product apart from the crowd?"

HTC did not respond to requests for comment by press time.

Both the Titan and the Radar will be broadly available worldwide from October, starting in Europe and Asia.

Tech Specs for the Mango Devices

The Titan and the Radar both have the standard front and rear cameras and can shoot 720p HD videos. Both have a dedicated hardware camera button that lets users take photographs without having to unlock the phones.

They both come with the HTC Watch video service, which was introduced in April on the then-newly launched HTC Sensation 4G.

Watch is an application and service that provides access to the latest premium movies and TV shows.

Both the Titan and the Radar offer access to Microsoft's Zune music service and have Virtual 5.1 surround sound.

They both have HTML5 support and let owners access Microsoft Xbox Live. Both also provide the usual access to social networking services.

The HTC Titan has a 4.7-inch LCD screen, an ultra-slim 9.9mm curved brushed aluminum shell, and built-in Microsoft Office Mobile. It has a Qualcomm (Nasdaq: QCOM) Snapdragon 1.5GHz processor and is a 3G device.

The HTC Radar is also a 3G smartphone. It has a Qualcomm Snapdragon 1GHz processor.

The Sense-less HTC Smartphones

Notably, neither the Titan nor the Radar has the HTC Sense graphical user interface, which HTC developed for mobile devices running Windows Mobile, Android and Brew.

That could be an attempt by Microsoft to exert some control over its new mobile phone system.

One of the issues that plagued Windows Mobile, the predecessor of WinPho7, is that each smartphone manufacturer put its own UI on top of the operating system, resulting in a fragmentation of the market and a lack of interoperability among Windows Mobile phones.

"I think Microsoft is clamping down on fragmentation," IDC's Stofega opined. "They learned with Windows Mobile 5 and 6 that you need some sort of control lever to make sure people don't do things that are ultimately not good for the operating system."

A Brave New World

The Titan isn't the only smartphone with an enormous screen.

Samsung launched the Galaxy Note Thursday. This has a 5.3-inch super AMOLED screen, and Samsung calls it a new type of device.

Perhaps the smartphone market is reversing its original trend, wherein devices kept shrinking in size -- or perhaps this is a whole new direction for the market.

"I think we're starting to see a new category of mobile devices emerging," Will Stofega, a program director at IDC, told TechNewsWorld.

"You might say that we're seeing a new version of the smartphone, one we might consider to be a superphone, which may look clunky compared to normal smartphones but will be a little more portable than a tablet," Stofega added.
