Saturday, 3 April 2010

Top broadcast journalism prize goes to... a website?

When most folks think about George Foster Peabody awards for distinguished broadcast journalism, famous recipients like Edward R. Murrow or the Frontline TV series come to mind. Certainly the vast majority of Peabody picks are either people or programs. But the latest winners, announced on Wednesday, include a website: National Public Radio's "topically boundless counterpart," as Peabody calls it. Everybody else knows the site as npr.org.

"A whole lot of things considered, from 'South Park' to North Korea, make this one of the great one-stop websites," Peabody adds. Needless to say, the suits over at the service are tickled pink by this prize.

"For all of us, today's awards speak to NPR's ability to adapt and grow while continuing to tell stories and create new online features that serve your needs and interests," NPR Vice Presidents Ellen Weiss and Kinsey Wilson declared on the network's blog.

Indeed, npr.org is quite something. You can get the latest news, or hourly news, access a slew of music, listen to concerts, tune into all your favorite NPR shows, follow dozens of interviews, stick NPR widgets on your desktop, listen to great features about culture, find your local NPR station, or growl at NPR's ombudsman, Alicia Shepard, all on one easy-to-navigate portal.

And the entire site comes—dare we say it—without commercial Web ads (save for NPR donation widgets and the NPR Shop, of course).

But this award reflects more than the appeal and usefulness of NPR's website. We think it's recognition that NPR represents one of broadcast radio's few success stories over the last decade.

Radio survivor

The master narrative of radio was brutally but accurately summarized by a Boston Globe market survey report in 2005: "On Demand Killed the Radio Star—How Satellite Radio, the Internet, Podcasting and MP3 Players Are Changing the Terrestrial Radio Landscape." Add to that the foolhardy broadcasting mergers that followed the Telecom Act of 1996, mix in this nasty recession, and you wind up with train wrecks like Citadel Media, the nation's third biggest broadcaster, whose shares closed at 1.6 cents last December following the company's bankruptcy announcement.

NPR has not only survived all this, but compared to the rest of the pack, the network is thriving. The service saw a huge boost in its listenership during the 2008 election—a 7% jump, bringing the audience to 27.5 million listeners weekly. At present, a third of the nation's FM radio stations are classified as "educational," and over 900 of those are either NPR affiliate stations or run some NPR programming.

On top of this, NPR has pursued an aggressive Internet and mobile broadband strategy, with a terrific Public Radio Player for the iPhone, which now has 2.5 million subscribers. You can reach hundreds of public radio stations with the app, picking and choosing which show you want to listen to and when, or opting for on-demand streams that can be accessed at any time.

Plus there's a mobile NPR.org (m.npr.org), and an NPR news app for Android. The app allows backgrounding, so users can access other applications while tuning into headlines. And since it's an Android app, the code is open source.

And last week, NPR released a new read-write API that will allow other media services to post content to NPR as well as receive it. The first participants in this experiment include Oregon Public Broadcasting and the Northwest News Network, followed by KQED in San Francisco, WBUR in Boston, and WXPN in Philadelphia.
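For a sense of what consuming an API like this looks like in practice, here is a minimal read-only sketch in Python. The endpoint URL, parameter names, and API key below are illustrative placeholders rather than a transcription of NPR's documented interface, and the third-party requests library is assumed to be installed.

```python
import requests  # third-party HTTP library, assumed installed

# Hypothetical story-query call; host, path, parameters, and key are
# placeholders for illustration, not NPR's documented values.
API_URL = "http://api.npr.org/query"
params = {
    "id": "1001",          # assumed: a topic identifier
    "numResults": 5,       # assumed: number of stories to return
    "output": "JSON",      # assumed: response format selector
    "apiKey": "YOUR_KEY",  # registration with the provider assumed
}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()

# Walk whatever list-of-stories structure the JSON response exposes.
for story in response.json().get("list", {}).get("story", []):
    print(story.get("title", {}).get("$text", "<untitled>"))
```

A write call against the new read-write side of the API would presumably follow the same pattern with an authenticated POST, but the details above are assumptions for illustration only.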

More to come

Such is NPR's zeal for keeping up with the cyber-Joneses that the network promises it will be completely iPad compatible when said famous device is released by Apple on April 3.

"From day one, iPad users who visit the NPR website will get an experience that is optimized for the device," the Inside NPR blog pledges. "Features like the NPR audio player have been given greater visibility and adapted for the unique technical requirements of this new platform; we've modified the navigation and made the site more 'touch' friendly; and we've improved the sponsorship experience—all without changing the main site."

Proactive, adaptive, and imaginative, public radio is keeping radio alive. Congratulations to npr.org.

Ubuntu 10.10 to be codenamed Maverick Meerkat

Ubuntu 10.04, codenamed Lucid Lynx, is scheduled for release this month. The developers at Canonical, the company behind the Ubuntu Linux distribution, have already started the process of planning for the next major release. Founder Mark Shuttleworth revealed today in a blog entry that Ubuntu 10.10, which is scheduled to arrive in October, will be codenamed Maverick Meerkat.

Ubuntu 10.04 is a long-term support release, which means that the focus during the current development cycle has largely been on stabilization and refining the existing technology. Shuttleworth says that we can expect to see a return to experimentation in the 10.10 release, with the potential for some radical changes.

Some of the most important goals include delivering a new Ubuntu Netbook Edition user interface, improving the Web experience, boosting startup performance, and extending social network integration on the desktop. Shuttleworth also hopes to advance Ubuntu's cloud support by simplifying deployment and making it easier to manage cloud computing workloads.

The meerkat was chosen as the mascot for the new version because the creature embodies some of the key values that will influence the coming development cycle.

"This is a time of change, and we're not afraid to surprise people with a bold move if the opportunity for dramatic improvement presents itself. We want to put Ubuntu and free software on every single consumer PC that ships from a major manufacturer, the ultimate maverick move," Shuttleworth wrote in the announcement. "Meerkats are, of course, light, fast and social—everything we want in a Perfect 10."

Canonical's staff, Ubuntu contributors, third-party application developers, and members of various upstream communities will gather in Belgium next month for the Ubuntu Developer Summit (UDS), an event that takes place near the beginning of each new Ubuntu development cycle. This event provides a venue for planning the details of the next major version of the distribution. More specific details about the Maverick roadmap will be available after the event.

Ubuntu 10.10 will coincide with the launch of GNOME 3, a major overhaul of the open source desktop environment that provides significant parts of the Ubuntu user experience. Shuttleworth's statements about bold moves and opportunities for dramatic improvement suggest that we could potentially see Ubuntu adopt the new GNOME Shell if it proves suitable. It's possible that we could also see the new default theme evolve and benefit from experimental features that were deferred during this cycle, such as RGBA colormaps and client-side window decorations.

The upcoming 10.04 release is looking really impressive. Users can expect to see even more progress as the Maverick Meerkat begins to take shape. As usual, we invite our readers to share their most humorous alternate codename suggestions in the discussion thread.

IBM initiative aims to hook startups while they're young

Vendor lock-in gets a bad rap, especially when it comes to the cloud. Users may complain about it, and IT administrators may eye cloud platforms with distrust on account of it, but lock-in is one of the core tradeoffs that clients make in return for access to scalable, flexible cloud services. And that lock-in provides some security for service providers who are taking on the considerable infrastructure cost that building a cloud platform entails. That’s why IBM is now cultivating lock-in by adopting a version of the same strategy Microsoft used in the '80s and '90s to establish the Windows and Office monopolies—give away the product (in Microsoft’s case, by turning a blind eye to rampant piracy), so that your user base is locked in by the time you get really serious about charging.

That looks to be the motive behind IBM’s Global Entrepreneur initiative, which promises early-stage startups free use of specific IBM cloud services, as well as access to the kind of sales, marketing, and technical expertise that Big Blue’s growing and hugely successful services arm typically charges big bucks for. Check out the roster of goodies for startups that are accepted to the program:

Under the new initiative, start-ups can for the first time:
  • access IBM's software portfolio through a cloud computing environment, including IBM industry frameworks to accelerate software development;
  • work side-by-side with scientists and technology experts from IBM Research to develop new technologies;
  • take advantage of dedicated IBM project managers to assist in product development;
  • attend new IBM SmartCamp mentoring and networking workshops with VC firms, government leaders, academics, and industry experts at the global network of 40 IBM Innovation Centers to build business and go-to-market plans;
  • tap a new social networking community on IBM developerWorks to connect with other entrepreneurs and more than eight million IT professionals from around the world.
And then, when your startup grows up, it will be very hard—if not completely impossible (more on this below)—to ditch IBM's platform for someone else's.

To qualify for the program, startups will need to be less than three years old, privately held, and "actively developing software aligned to IBM's Smarter Planet focus areas." IBM is partnering with 19 associations and VC groups in different parts of the world in order to identify startups and attract them to the initiative.

All told, anything that gets promising, early-stage companies to build their software directly to IBM's cloud is great for IBM—and the fact that these companies will also get a free taste of IBM's consulting services is an added bonus. But is it good for the startups?

The answer to that latter question depends entirely on how good IBM's cloud offerings are, and not so much on the fact of the lock-in itself. That's because lock-in is a defining feature of the cloud landscape, and when you decide to use cloud services either as a consumer or a company, you have to go in with your eyes open.

Lock-in goes with the territory

The issue of lock-in comes up enough in discussions of the cloud that it's worth recapping how it works for readers who don't follow the topic as closely.

In a nutshell, cloud services are offered at three levels of abstraction, and the higher up you go on the abstraction ladder, the more you're locked in to a specific vendor's offering.

The lowest level with the least lock-in is infrastructure-as-a-service (IaaS). A great example is Amazon's EC2, which lets you cheaply and quickly get metered access to any number of virtual Windows or Linux machines. If at some point you decide you don't like EC2, you could always host identically configured VMs on your own in-house hardware, and ditch Amazon's platform entirely.
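To make the IaaS level concrete, here is a minimal sketch of launching and tearing down a virtual machine programmatically, written in Python against the boto3 library for Amazon's API. The region, image ID, and instance type are placeholders, and AWS credentials are assumed to already be configured in the environment.

```python
import boto3  # AWS SDK for Python; credentials assumed to be configured

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Launch a single virtual machine. The image ID below is a placeholder;
# any Linux or Windows image ID valid in your account would do.
result = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder image ID
    InstanceType="t2.micro",  # placeholder instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Because this is plain infrastructure, an identically configured VM could
# later run on in-house hardware instead, which is why IaaS lock-in is weak.
ec2.terminate_instances(InstanceIds=[instance_id])
```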

At the next level up are platform-as-a-service (PaaS) offerings like Google App Engine. It's at this level that the real lock-in starts. If you build your application on App Engine, then it's an App Engine application. And if Google's platform goes down, as it has done once already this year, then so does your app. Or, if you decide you hate Google and want to switch, you'll have to rewrite the app.
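As a rough illustration of why that rewrite is unavoidable, here is what a minimal handler on the classic Python App Engine runtime looks like. This is a sketch, not production code, and it leans on webapp2 and the App Engine datastore API, both of which exist only inside Google's platform.

```python
# Sketch of a classic (Python 2.7-era) App Engine handler. Both imports
# resolve only inside Google's runtime, which is exactly the lock-in at issue.
import webapp2
from google.appengine.ext import db


class Greeting(db.Model):
    # Data lives in App Engine's own datastore, not in a portable SQL database.
    content = db.StringProperty()


class MainPage(webapp2.RequestHandler):
    def get(self):
        Greeting(content="hello from the cloud").put()
        count = Greeting.all().count()
        self.response.write("greetings stored so far: %d" % count)


# App Engine serves this WSGI application directly; moving elsewhere means
# replacing the routing layer and rewriting every datastore call.
app = webapp2.WSGIApplication([("/", MainPage)])
```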

At the topmost level of abstraction is software-as-a-service (SaaS), the most commonly cited examples of which are Salesforce.com and SugarCRM. These are cloud applications that you pay to access, and they've got your data siloed away in their cloud. For some SaaS apps, like Google Docs, you could conceivably get your data back out, but it's a pain. SaaS platforms are designed to ingest data and keep it, not to spit it back out in an easily portable format (though there's a movement afoot to change that).

Ultimately, lock-in will be a prominent feature of the cloud landscape from here on out, and more companies will follow IBM's strategy of actively targeting early-stage software startups in order to hook them on a specific platform. This isn't necessarily a bad thing or a good thing—it's just one more technological trade-off to juggle. So it's up to cloud users to educate themselves about the amount of lock-in that they'll be subject to when they commit to any cloud platform, and to factor that into their decision. If lock-in is a serious concern for you, then you'll have to be extremely careful about the kinds of cloud services you pick, and about how you use those services.

FCC photos reveal iPad internals, sculpted aluminum case


The Federal Communications Commission beat iFixit to the punch in publishing the first iPad take-apart photos, although the Commission did have an unfair advantage since it got pre-launch access to the device for its usual RF testing. The photos do give a first look at the iPad's laser-sculpted aluminum casing as well as a little detail about how the hardware is put together.

What's not surprising is that most of the internal volume is taken up by two large Li-Ion batteries. The logic board is tiny and appears to be not much bigger than an iPhone. All of the internal components are jammed in there good and tight, as one might imagine. But what's most surprising is that there is actually a good amount of empty space inside.


Lest you think iFixit took this challenge lying down, however, the company craftily removed the FCC's meager attempts to cover up details of the chips that Apple requested the FCC keep "confidential." iFixit analyzed the source of the components, but none of them are major surprises so far. There's an Apple A4 processor, Toshiba flash memory, and Broadcom radio chips. The IPS display panel is also suspected to be made by LG Philips.

Some of the components are too small to make out in the relatively low-resolution images, and some of the components might be slightly different in the actual shipping version. iFixit promises to have a more detailed analysis after receiving its own iPad.

Friday, 2 April 2010

Report: Apple purchases another processor design house

Apple's gigantic bankroll may be burning a hole in its pocket. Almost two years after purchasing PowerPC designer P.A. Semi, Apple appears to have snapped up ARM design house Intrinsity. According to a report that first appeared on EDN (via electronista), a number of engineers at the company have indicated that they are now or soon will be employed by Apple. Some of them have even gone as far as to change their LinkedIn profiles, with one later reverting the change—possibly out of fear of drawing the wrath of his new, secretive employer.

Intrinsity is known for its expertise with ARM processors such as the one used in the iPad. The design house has always been fairly quiet about its client list, so it's quite possible that Intrinsity contributed as much to the A4 ARM processor that powers the iPad as P.A. Semi did, if not more. Intrinsity has done work on customized Cortex A8 processors for the likes of Samsung, so the company's expertise in the area would be extremely attractive to Apple.

If the sudden disappearance of Intrinsity's website is truly an indication that Apple has made another purchase, it's a clear sign that Cupertino has really big plans for ARM and doesn't see a future for x86 outside of its desktops and laptops. In addition to powering iPads and iPhones, it's possible that we could see Apple-created ARM chips in other consumer devices—even HDTVs—if Apple wants to try to stake out more consumer electronics turf.

How the fake "iCade" could become a reality for the iPad


Games are big on the iPhone—the majority of apps on the App Store are games, and games are regularly among the top-selling paid and free apps. With the iPad, gaming on Apple's mobile devices is poised to get even bigger (pardon the pun). But even with the touchscreen and accelerometer inputs, some games just need more traditional D-pad or joystick controls. The question is, why aren't third-party accessories available to give us this control?

Apple itself may be getting into the gaming accessory business, if the details of a recently published patent application are any indication. A group of emulator enthusiasts has already started limited production of a similar accessory for the iPhone and iPod touch. And, a fake gaming accessory from ThinkGeek has caused a major buzz on the Internet, enough that the company may be considering turning it into a real product.

Apple has filed a patent application, published today, for an "Accessory for Playing Games with a Portable Electronic Device." The patent was originally filed in September 2008, so clearly Apple has given thought to this issue. It describes a device that a "portable electronic device," such as an iPhone or iPad, could slip into, offering the user "a plurality of input controls that may be actuated by a user while playing a game." The illustrations that accompany the patent clearly show an iPhone-like device connecting to the accessory via a dock connector.

The accessory could include any of the following: buttons, a D-pad, a joystick, a keypad, a microphone, a camera, speakers, and even a secondary display. The device could also offer a number of gaming-related features, such as force feedback or streaming of video or audio to external devices. Apple also suggests it could include integrated memory for storing your "game progress."

Apple isn't the only one to be thinking along these lines. In May 2008, a group of hackers modified a SNES controller to work with a jailbroken iPhone over a serial connection, and built a prototype of what is now known as the iControlPad. The device is akin to a bulky iPhone case that adds a D-pad and four buttons that could be used to control games.

Over the last two years a few prototypes were made as the iControlPad was refined, and now a finalized version is being manufactured in limited quantities in the UK. One version even includes a built-in rechargeable battery to power an iPhone or iPod touch for "marathon gaming sessions." (I would buy one just to play Space Miner without having to recharge every eight hours.) There is also an SDK that developers can use to add support to the games they make. The only problem is that it will only work with games made for jailbroken iPhones or iPod touches—the numerous games from the App Store just won't work.

The problem isn't technological, though. Apple added an API in iPhone OS 3.0 that lets accessories connected via the Dock connector communicate with an app. Unfortunately, making such an accessory means a company must join Apple's "Made for iPod" program, which means negotiating a licensing fee to gain access to both the API and the necessary chips that must be included to "authorize" accessories to connect to an iPhone, iPod touch, or even the iPad. The cost alone can be prohibitive, and only a few accessories have been produced so far.

Another roadblock is that the current implementation of the accessory API limits a device to communicating with only one app. Even if App Store developers wanted to add iControlPad support to their games, so far they wouldn't be able to. (This misstep doesn't just affect gaming accessories, either—generalized sensors could be made for a variety of scientific applications, for instance.)

These roadblocks could also prevent products such as ThinkGeek's iCade from ever becoming a reality. The iCade purports to be a miniature classic arcade cabinet that you slide an iPad into. The cabinet includes stereo speakers and a classic joystick and buttons, which, when coupled with the iPad's large screen, would make for the ultimate retro gaming device. The iCade would interface via the Dock connector with an iPad version of the MAME arcade emulator, allowing anyone to live out their own King of Kong kill-screen fantasies.

Though the product is actually a clever (and cruel) April Fool's joke, Ty Liotta, ThinkGeek's merchandising manager and head of custom product design, told Ars that it could very well end up being a real product. "People would really like to buy it, and we have had many people e-mailing and requesting it be created," Liotta said. "Our customers know we have turned April Fools items into real products before, so they know there is the potential there."

Liotta was referring to one of last year's April Fools products, a sleeping bag that claimed to be made of the "exact synthetic compounds needed to re-create Tauntaun fur," ostensibly to keep you warm even in temperatures as low as those on the ice planet Hoth. ThinkGeek got so many requests to make the product that it secured a license from LucasFilm and had them manufactured. (Ars art director Aurich Lawson counts himself among the many proud owners.)

Despite the software licensing issues and the current limitations of the accessory API for iPhone OS, Liotta is hopeful that ThinkGeek could work with Apple and perhaps game developers to make the product a reality. "I did see a number of different products when I was at CES that use the Dock connector in all kinds of different ways," Liotta told Ars. "I didn't think we could convince Lucasfilm to let us make the Tauntaun sleeping bag and obviously that worked out."

Over 2,000 customers have signed up on ThinkGeek's website to be alerted whenever the iCade is available to order, and one customer even offered to fund 100 percent of the development costs to make the iCade a reality. Along with production of the iControlPad, it's clear that gamers are willing to pay to enhance the gaming experience on Apple's mobile devices.

Apple went through the trouble of filing a patent on a gaming accessory, but so far hasn't produced one of its own. If Apple isn't interested in making one, it should work with product developers to make building these accessories feasible and allow developers to build support into their games.

Universal Service Fund: now with less incompetence!

The Federal Communications Commission's Universal Service Fund is cleaning up its act. Yes way—for real. And not only that, it looks like we've been a tad unkind to the benighted program in the past. Turns out that what seemed like a pretty devastating audit of one of the USF's main programs was way off in its calculations.

Here's the short version of that story. The USF, paid for by small tithes on your phone bill, runs four programs: a fund that subsidizes the phone bills of the poor; a program that subsidizes the computer/network needs of schools and libraries; another that underwrites broadband for rural health care facilities; and a division that offers financial support to rural carriers.

That last program is called the "high cost" fund. It helps with the challenges that rural carriers face in trying to provide service to relatively few consumers in spread out areas. Unfortunately, past audits of the fund have concluded that its high cost title has a second, less desirable meaning—a scarily huge error rate in payouts to carrier recipients: 16.6 percent, according to a review that the FCC's Inspector General released three years ago. A subsequent assessment warned that the program overpaid carriers by almost a billion dollars from July 2006 through June 2007.

But the Universal Service Administrative Company's new Annual Report includes a re-check of those numbers that calls them way too high. Not 16.6 percent for that first assessment, USAC says, just 2.7 percent. "USAC anticipates similar results in the final reports on the second and third rounds of the FCC OIG USF audit program," the Annual Report also notes.

Big fixups

Still, the document indicates the company has taken to heart many of the criticisms of the ways that it monitors its four programs. In mid-July of 2008 the Government Accountability Office warned that the FCC had not established meaningful performance goals for the USF. But the GAO's most important finding was that nobody really audits the cost records of these telcos for, well, costs. FCC and USF data collection efforts only peer at a small percentage of recipients, GAO charged, and "generally focus on completeness and consistency of carriers' data submissions, but not the accuracy of the data." This could "facilitate excessive program expenditures," the report very politely concluded.

Now, in 2010, USAC will take a new approach, the company promises, "analyzing data from beneficiaries and from USAC to measure rates of improper payments and using a broad audit program to measure program compliance." The FCC has also established an interim cap on high cost fund payments to competitive carriers. And the low income program is completely revamping itself, with a new cost-tracking system to reduce accounting errors.

All this is good news, because the Universal Service Fund could become a huge engine for the expansion of broadband in the United States. Last year the USF paid out $7.3 billion to its recipients—money going out to 1,865 eligible telecommunications carriers in the case of the high cost program. But the balance of that cash went to phone service providers, not to ISPs.

So the FCC's National Broadband Plan recommends that Congress transition USF money to two new programs. First, a Connect America Fund to support broadband providers for poor and rural regions. The CAF, as outlined in the Plan, is designed to avoid the errors of High Cost. It will only provide funding in zones "where there is no private sector business case to provide broadband and high-quality voice-grade service." The program will give to no more than one provider per area (as opposed to over a dozen in some present instances). Its recipients will be adequately audited. And, of course, they will have to provide broadband.

Second, the FCC wants Congress to launch a "mobility fund" to help various states get up to speed in 3G wireless.

All this could take a while for the House and Senate to get out the door. The FCC says this transition needs to happen by 2020, with reforms of High Cost and disbursements from Connect America both beginning in 2012. But the agency isn't waiting for Capitol Hill to get started. The Commission's next meeting, scheduled for April 21, will propose "common-sense reforms to the existing high-cost support mechanisms to identify funds that can be refocused toward broadband"—plus a Notice of Inquiry that asks for input on "the use of a model to determine efficient and targeted support levels for broadband deployment in high-cost areas."

Let's see how far the USAC and FCC can get on their own while waiting for Congress to take the big steps. Hopefully they won't have to tread water for too long.

Thursday, 1 April 2010

Apple reportedly tweaked the iPhone to work better on AT&T

Since the original iPhone launch, AT&T has put in motion a number of upgrades to its wireless network to accommodate the pounding it received at the collective fists of millions of iPhone users. But according to AT&T CTO John Donovan, Apple has also done its part to adjust the iPhone to work better on AT&T's network.

Donovan told the Wall Street Journal that, even as the company worked to convince Apple that it was improving its network, AT&T engineers went to Apple to give Apple's engineers a "crash course" in wireless networking. Apple modified how the iPhone communicates with towers to reduce the overhead for making connections or sending texts.

"They're well past networking 101, 201 or 301," Donovan told WSJ. Apple is now "in a Master's class."

Ars contacted both AT&T and Apple for further details about what was changed, but neither company offered any specific information. We do know, however, that the iPhone—as well as smartphones that came after it—uses certain techniques for saving battery power that can bog down signaling channels on cell towers that aren't configured to handle signaling loads dynamically. The last we heard about significant changes in the 3G networking capabilities of the iPhone OS was in late 2008, though it's a safe bet that Apple has since tweaked the network stack whenever needed.

AT&T learned the hard way that iPhone users didn't add network traffic in the same predictable patterns as users of other phones did. The company "is managing volumes that no one else has experienced," Donovan said. The growing pains that AT&T experienced as the iPhone skyrocketed to the top of the smartphone market in the US have left a number of users frustrated, with many willing to jump to another carrier if it could offer service for the iPhone. Half of Ars readers using an iPhone said they would switch to Verizon if a rumored CDMA-compatible iPhone materialized soon.

Verizon CTO Anthony Melone bragged late last year that the company was more than ready to handle the onslaught that iPhone users would bring to the network. "We are prepared to support that traffic," Melone told BusinessWeek.

That's easy to say, AT&T spokesperson Seth Bloom told Ars, "but the truth is no one knows what their network would look like if they had the iPhone." "Of course, it's true that others have been able to watch what we've done to handle a 5,000 percent surge in data traffic," Bloom added. "But watching is quite different from doing."

Bloom said that the upgrade to HSPA 7.2 and the added backhaul—"enough to also support our LTE buildout"—will keep AT&T ahead of the competition, which may be enough to keep iPhone users from looking at other carriers. "More and more people are going to get the benefits of 7.2 speeds this year on our network," he said.

Setting the record straight: no simple theory of everything

A bit over two years ago, news sources and science blogs lit up when a pre-print paper from Dr. A. Garrett Lisi came to light that proposed a novel theory of everything—one theory that accurately describes all four of the universe's fundamental forces. Current theories have demonstrated that three of the four fundamental forces and their associated particles can all be obtained from different symmetry operations (think rotations and reflections) of an algebraic group called a Lie group. The pre-print, hosted by the online repository arXiv, proposed that, within the complicated symmetry group E8, all four forces of nature could be described and united.

The hype that ensued (Google still suggests "surfer physicist" as a possible query) made this pre-print the most downloaded arXiv paper by March of 2008, and spawned an entire Wikipedia entry for the paper alone. As an engineer who specializes in theoretical work, I saw it as a case of "give me enough parameters and I can fit a horse." Our in-Orbiting-Headquarters physicist, Dr. Chris Lee, described it as solid, but noted it had some serious shortcomings.

In the intervening years, the paper—to the best of my knowledge and research ability—has not made it through peer review to publication. A new paper, set to be published in an upcoming edition of Communications in Mathematical Physics, formally addresses the idea and finds not only that Lisi's specific theory falls short, but that no theory based on the E8 symmetry group can possibly be a "Theory of Everything."

The new paper, which is also freely available as a preprint through the arXiv, is highly technical and lays out its case as the proof of a mathematical statement that defines the criteria for a valid theory of everything. The authors begin by laying out three key criteria that a pair of subgroups of a Lie group must satisfy in order to yield a 'Theory of Everything.' The first is a trivial, yet purely mathematical, restriction that must hold between the chosen subgroups. The second is that the model cannot contain any "'exotic' higher-order spin particles." The final criterion is that the gauge theory employed in the group must be chiral—a limitation dictated by the existing Standard Model.

In the original work by Lisi, it was proposed that the 248 dimensions of E8 corresponded to specific particles, either bosons or fermions—over 20 of which had yet to be discovered. The authors of the paper state that, in private communications, Dr. Lisi has backed off on this specific claim, and now states that only a subset of these dimensions represent actual particles. This modified version of the theory has yet to be publicly presented.

Going with the original version, one has to ask whether there is enough room within the potential subgroups to account for the particles we already know about. Math tells us that all the fermions—particles with half-integer spin—must come from what is known as the (-1)-eigenspace of the Lie group. Physics and math tell us that, in order to describe the known fermions, the subgroup of interest must have 180 dimensions.

Unfortunately, the (-1)-eigenspace on E8 (where the fermions must exist) has either 112 or 128 dimensions—too few to account for the known fermions. This goes directly against the claims made in Lisi's original work, and is not compatible with known spin theory and the three generations of matter described by the Standard Model. The authors do point out that this result is not incompatible with a 1- or 2-generation Standard Model (as opposed to the accepted 3-generation Standard Model) being embedded in a real form of E8.
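Restated compactly, using only the dimension counts quoted above (this is a summary of the argument, not notation taken from the paper itself):

```latex
% E8 has 248 dimensions in total; the fermions must fit inside its
% (-1)-eigenspace, which is too small for three generations.
\[
  \dim E_8 = 248, \qquad
  \dim V_{-1} \in \{112,\ 128\} \;<\; 180,
\]
% where 180 is the number of dimensions needed to describe the known
% fermions of the three-generation Standard Model.
```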

The authors go on, in a purely mathematical fashion, to prove that any "Theory of Everything" is not capable of meeting all three criteria in any real representation of E8, or even in a complex representation of E8. They show that any set of subspaces that meets the first criterion and either the second criterion or a relaxed version of it will necessarily fail to meet the third.

When asked, Prof. Garibaldi, a co-author of the paper and an expert in exceptional Lie groups, stated that he felt an obligation to help set the record straight. "A lot of mystery surrounds the Lie groups, but the facts about them should not be distorted," he said. "These are natural objects that are central to mathematics, so it's important to have a correct understanding of them."

He went on to describe the work in easy to understand terms, and elegantly showed how disputes in science are handled. "You can think of E8 as a room, and the four subgroups related to the four fundamental forces of nature as furniture, let's say chairs," Garibaldi explained. "It's pretty easy to see that the room is big enough that you can put all four of the chairs inside it. The problem with the 'theory of everything' is that the way it arranges the chairs in the room makes them non-functional."

(An example of this being that one chair is inverted and stacked atop another chair—it's there, but it isn't useful for sitting.)

"I'm tired of answering questions about the 'theory of everything,'" Garibaldi said. "I'm glad that I will now be able to point to a peer-reviewed scientific article that clearly rebuts this theory. I feel that there are so many great stories in science, there's no reason to puff up something that doesn't work."

Google bakes Flash into Chrome, hopes to improve plug-in API

Google announced Tuesday that its Chrome Web browser will integrate Adobe's Flash plug-in. The latest version of Flash will ship with Google's Web browser, obviating the need for end users to download and install it separately. Google will also start regularly deploying new versions of Flash through Chrome's update system in order to ensure that users always have the latest version.

Google has also revealed that it will be working closely with Adobe, Mozilla, and other players in the Web ecosystem to improve the API that browsers use to support plugins. Such improvements could potentially help ameliorate some of the technical deficiencies that have plagued Flash and other plugins.

A new version of Chrome with the integrated Flash plug-in was rolled out yesterday to users of the Chrome developer channel. The Flash integration is not enabled by default because it is still highly experimental. It can be turned on by launching Chrome with the --enable-internal-flash flag at the command line. The new developer version of Chrome also has a new plugin management interface that can be used to toggle which plugins are active.

Can Flash be fixed?

Although the Flash plug-in is widely used on the Internet, it is strongly disliked by a growing number of users. Websites like YouTube are seeing strong demand for adoption of standards-based alternatives to Flash. There are signs that disdain for Adobe's plug-in is becoming increasingly mainstream and is no longer just confined to the community of technology enthusiasts and standards advocates.

This trend is largely driven by the fundamental technical failings of Adobe's technology. The Flash plug-in is often criticized for its awful browser integration, poor performance (especially on Mac OS X and Linux) and stability, lack of conduciveness to accessibility, and excessive resource consumption. Another major problem is its frequent security vulnerabilities, which have made it a major target for exploits.

Adobe deserves much of the blame for Flash's defects, but the problem is also largely attributable to the underlying limitations of the framework that browsers use to enable plug-ins.

The original browser plug-in system, called the Netscape Plugin Application Programming Interface (NPAPI), was first introduced in Netscape Navigator 2.0. It is a historical irony that Adobe itself played a key role in influencing the earliest development of the plug-in API, before Flash even existed. Several Adobe developers collaborated with Netscape to produce the API so that the nascent Acrobat Reader program could be embedded in the browser and used to display PDF content on the Internet.

The plug-in architecture was basically designed for the purpose of running an independent program inside of the main browser window, but it has been stretched far beyond its intended capacity by modern plug-ins that attempt to provide a lot more functionality. Due to the limitations in the design of the plug-in API, Flash has to maintain its own insular universe inside of a rectangle that doesn't seamlessly mesh with the rest of the page.

Referred to as the "plug-in prison" phenomenon, this limitation has created a lot of barriers to making plug-ins like Flash operate as a native part of the Web. You can see its detrimental impact on the browsing experience in many areas—such as scrolling, keyboard navigation, text selection, and resizing—where Flash content simply doesn't conform with the expected behavior.

Improving the API

Mozilla and other major stakeholders have been working on an update to the plug-in API with the aim of improving the situation. It won't even come close to fixing all the problems with Flash, but it will begin to address certain critical issues. According to the documentation that has been published so far, the update will attempt to boost consistency between implementations, augment support for out-of-process plug-ins, and improve the way that plugin rendering integrates with browser compositing in order to fix layering glitches. There will also hopefully be some opportunities along the way for improving performance, stability, and security.

Mozilla started publicly working on it last year. Google has now affirmed its intention to join the effort and contribute to improving browser support for plug-ins.

"The browser plug-in interface is loosely specified, limited in capability and varies across browsers and operating systems. This can lead to incompatibilities, reduction in performance and some security headaches," wrote Chrome engineering VP Linus Upson in Google's official Chromium blog. "That's why we are working with Adobe, Mozilla and the broader community to help define the next generation browser plug-in API. This new API aims to address the shortcomings of the current browser plug-in model.

Google's interest in this effort seems somewhat inconsistent with the company's affinity for emerging native Web standards, but there are several relevant factors to consider. It's worth noting that Google itself has implemented its own browser plug-ins, such as the Native Client (NaCl) technology.

Improvements that Google contributes to the NPAPI could potentially be beneficial for NaCl, enabling Google to use it in ways that might not otherwise be possible or practical. Another relevant factor is the potential opportunity for ChromeOS. Strong support for Flash could look like a competitive advantage for ChromeOS-based devices relative to Apple's competing products.

Are plug-ins still relevant?

Although there is clearly an opportunity for Adobe and browser vendors to make Flash behave better on the Web, it may never be a first-class part of the Internet. Indeed, one could argue that the idea of a proprietary vendor-controlled plug-in that loads interactive components into a Web page from a binary blob is fundamentally antithetical to the underlying design of the Web.

As the standards process accelerates and previously weak browsers like Internet Explorer start to catch up and deliver support for the latest functionality, the need for plug-ins is rapidly declining. As an example of the growing viability of the standards process, consider the nascent WebGL standard, which has gained broad acceptance and multiple implementations in a very short period of time.

There is no technical reason that prevents Adobe from participating in the Web like a good citizen. Instead of maintaining a plug-in, the company could propose new functionality for Web standards, contribute implementations to open source browsers, and then target its authoring tools to support those capabilities. The vast majority of companies that are involved in the Web ecosystem are committed to extending the Web in that manner because it's simply more consistent with how the Web works.

Now that all of the major browsers, including IE, are aggressively adopting emerging standards, the value of plug-ins is not as clear-cut as it once was. It appears that the primary reason why Adobe still pursues a plugin-based strategy at this stage is so that it can preserve the vendor lock-in that is inherent in having a proprietary plug-in.

Wednesday, 31 March 2010

90 percent of Windows 7 flaws fixed by removing admin rights

After tabulating all the vulnerabilities published in Microsoft's 2009 Security Bulletins, BeyondTrust reports that 90 percent of the critical Windows 7 vulnerabilities can be mitigated by configuring users to operate without administrator rights. Looking at all published Windows 7 vulnerabilities through March 2010, 57 percent are no longer applicable after removing administrator rights. By comparison, Windows 2000 is at 53 percent, Windows XP at 62 percent, Windows Server 2003 at 55 percent, Windows Vista at 54 percent, and Windows Server 2008 at 53 percent. The two most heavily exploited Microsoft applications also fare well: 100 percent of Microsoft Office flaws and 94 percent of Internet Explorer flaws (including 100 percent of IE8 flaws) no longer work.

This is good news for IT departments because it means they can significantly reduce the risk of a security breach by configuring the operating system for standard users rather than administrator. Despite unpredictable and evolving attacks, companies can very easily protect themselves or at least reduce the effects of a newly discovered threat, as long as they're ok with their users not installing software or using many applications that require elevated privileges.
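As a small practical aside, an IT department scripting this kind of policy check can ask Windows whether a given session is actually running with administrator rights. Here is a minimal sketch using the Win32 shell call exposed through Python's ctypes; it illustrates the check itself and is not part of BeyondTrust's methodology.

```python
import ctypes
import sys


def running_as_admin() -> bool:
    """Return True if the current Windows process has administrator rights."""
    try:
        # IsUserAnAdmin() is a Win32 shell call; it returns nonzero when the
        # process token carries administrator privileges.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # ctypes.windll does not exist off Windows, so the check does not apply.
        return False


if __name__ == "__main__":
    if running_as_admin():
        print("Elevated session: the mitigation discussed above is lost.")
        sys.exit(1)
    print("Standard user session: most of the surveyed flaws are blunted.")
```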

In total, 64 percent of all Microsoft vulnerabilities reported last year are mitigated by removing administrator rights. That number increases to 81 percent if you only consider security issues marked Critical, the highest rating Redmond gives out, and goes even higher to 87 percent if you look at just Remote Code Execution flaws. Microsoft published 74 Security Bulletins in 2009, spanning around 160 vulnerabilities (133 of those were for Microsoft operating systems). The report has a list of all of them, which software they affect, and which ones are mitigated by removing admin rights.

Self-published authors to get in iBookstore via Smashwords

Apple initially named five of the top six major publishers as launch partners for its iBookstore for the iPad. More recently, we heard that two independent publishers had signed deals to provide e-books and that Apple plans to offer free public domain titles from Project Gutenberg. Now, self-published authors will also get a crack at the iBookstore via deals Apple has struck with e-book publishing services Smashwords and Lulu.

Smashwords and Lulu are for e-books what TuneCore is for music. TuneCore will take your CD (or indie film) and upload it to the iTunes Store for a flat fee, eliminating the need to jump through all the hoops necessary to set up an account directly with Apple. All the royalties earned on sales of the album and individual tracks are then forwarded to the artist.

Smashwords works a little differently; instead of an up-front fee, it takes a small percentage of the royalties that various e-book stores offer. However, authors merely need to upload a specially formatted Word document with the text of their book, along with an image of the cover. Smashwords uses tools that automatically convert the Word file into the formats specified for online e-book stores such as Barnes & Noble, Amazon Kindle, and Lexcycle Stanza, and uploads books to the stores that an author requests.

Details of a deal to add the iBookstore as a publishing option leaked after an e-mail to current Smashwords clients was widely published online. Though he couldn't comment on specific details, founder Mark Coker told Ars that he could confirm that Smashwords does have a deal in place with Apple.

The e-mail was sent to current Smashwords authors to make sure their books were ready in time to be on the iBookstore at this weekend's April 3 launch. It also explained that books destined for the iBookstore have a few extra requirements, which Smashwords will make standard for future submissions. All cover images need to be at least 600px tall, titles must have a unique ISBN (separate from the print version if one exists), and books must have a price that ends in ".99" (e.g., $4.99 rather than $4.95).
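Those three requirements are mechanical enough to check before submission. Below is a small, hypothetical Python sketch of such a pre-flight check; the function and field names are invented for illustration and are not Smashwords' actual schema or tooling.

```python
# Hypothetical pre-flight check for the iBookstore requirements described
# above: cover at least 600px tall, a unique ISBN, and a price ending in .99.
def validate_submission(cover_height_px: int, isbn: str, price: float) -> list:
    problems = []
    if cover_height_px < 600:
        problems.append("cover image must be at least 600px tall")
    if not isbn or len(isbn.replace("-", "")) not in (10, 13):
        problems.append("title needs its own 10- or 13-digit ISBN")
    # Price must end in .99, e.g. 4.99 rather than 4.95.
    if round(price * 100) % 100 != 99:
        problems.append("price must end in .99")
    return problems


print(validate_submission(cover_height_px=800,
                          isbn="978-0-306-40615-7",
                          price=4.95))
# -> ['price must end in .99']
```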

Apple also requires that pricing for e-book versions should be less than a print version if one is available, and there are limits to the maximum iBookstore price for the first 12 months after release depending on the price of the print edition. Currently there is no option to sell books on the iBookstore without FairPlay DRM, and Apple didn't respond to our request for comment on the matter.

Smashwords gives authors and publishers 60 percent of the retail price of an e-book as a royalty, with Apple keeping the usual 30 percent and Smashwords keeping a 10 percent cut for its services. "Our general policy for the last two years at Smashwords has been to return 85 percent of the net to our authors and publishers," Coker told Ars. Even with Smashwords keeping a small percentage, though, that's a far better deal than the 35 percent authors are getting publishing directly on the Kindle.

It also appears that book publishing service Lulu will begin to distribute books via the iBookstore shortly. Lulu will convert titles to ePub format automatically and upload them to the iBookstore unless authors specifically request that it doesn't. Further details about pricing and availability aren't known since no public announcement has been made, but terms would likely be similar to those offered by Smashwords.

More on next-gen iPhone and Verizon iPhone dreams

Apple plans to release new iPhone hardware this summer—this is widely accepted among the Apple community despite the lack of any announcement from Apple. But more rumored details about the hardware and its expected launch date support Steve Jobs' promise that it will be an "A+ update." This new hardware will supposedly launch on AT&T in the US, but yesterday we heard from the Wall Street Journal that a CDMA-based version was destined for Verizon sometime this year. Several analysts are less certain than WSJ's sources, but Apple may be ready to make the CDMA leap to stave off heated competition from Android.

After WSJ's story mentioned the obvious—that a new iPhone was coming this summer—Daring Fireball posted a number of expected features of the upcoming iPhone refresh. The device will likely be powered by a version (perhaps the exact same one) of the A4 processor inside the iPad. It may also have a 960 x 640 pixel display and a front-facing camera, and iPhone OS 4.0 is expected to enable some form of third-party multitasking.

All of those prognostications are fairly safe. Jobs long ago said that P.A. Semi would be designing chips to power Apple's mobile products, and if the A4 can get up to 10 hours of runtime for an iPad with its 9.7" LCD screen, it could likely give a big boost to the runtime for the iPhone in addition to snappy performance.

A 960 x 640 pixel display will help keep the iPhone in the running with some newer mobile devices with screen resolutions that exceed the current model's 480 x 320 display. The higher resolution also jibes with the "iPhone HD" moniker that Engadget sources suggest the new model will wear. The screen would still have the same 3:2 screen ratio with a doubling of pixel dimensions, meaning existing iPhone apps could easily be scaled to fit the new screen without much apparent difference in quality. However, apps updated for the higher resolution should look absolutely stunning.

The iPhone HD name may also refer to the camera hardware. Intel previously suggested that Apple would adopt a 5 megapixel OmniVision sensor that was effectively a drop-in replacement for the current iPhone's 3.2 megapixel sensor. The new sensor uses backside illumination for increased light sensitivity, but can also shoot HD video at 720p or 1080p, trumping the current iPhone's limit of 640 x 480 SD video.

The front-facing camera has been rumored as a hardware upgrade ever since the first iPhone launched in 2007. Such a camera would enable video calling or conferencing à la iChat AV or Skype. Several of the iPhone OS 3.2 betas contained resources necessary for video calling, and the feature may finally make an official appearance via a front-facing camera and iPhone OS 4.0. iPhone OS 4.0 is also expected to bring a "full-on solution" for running multiple third-party iPhone apps simultaneously.

Engadget's sources expect the new iPhone to launch on June 22. Daring Fireball suggests that date could be likely if WWDC is in early June. The new model would likely be announced during the WWDC keynote, giving developers a few weeks to update apps to take advantage of new features in time for its launch.

Despite what may prove to be a very compelling upgrade for current iPhone owners, what about those on non-GSM carriers? Apple has called CDMA a dead-end technology in the past, but an update to WSJ's report from yesterday about a likely Verizon-ready iPhone notes that Apple may have "changed its mind" given that LTE won't likely be widespread even on Verizon until sometime in 2011, and on AT&T until sometime in 2012 or later.

Several analysts expressed doubt that Apple would make the CDMA plunge this year, if at all. Sources for Morgan Keegan analyst Travis McCourt said that Qualcomm is working on a CDMA radio chip that could work with the iPhone, but that it won't be the dual-mode CDMA-GSM chip that Apple wanted. Even if Apple could build a CDMA-capable iPhone, there's still the matter of Apple and Verizon being able to come to an agreement to provide service for the device. For instance, AT&T's $30 per month data plan includes unlimited data, while Verizon's similar plan for smartphones is capped at 5GB. Verizon would also have to implement the necessary back-end to support the iPhone's Visual Voicemail feature.

UBS analyst Maynard Um also agreed that a Verizon launch this year would be unlikely. However, both analysts suggested that a CDMA iPhone could launch with other carriers, especially those that have far less immediate plans to transition to 4G LTE. Possible alternate options include China Telecom (which doesn't use UMTS/HSPA for 3G), KDDI in Japan, SK Telecom in Korea, or even Sprint in the US.

Still, several metrics suggest that Google's Android platform is growing much faster than the iPhone, especially in the US, even though the iPhone has still outsold Android devices by a large margin. Part of the reason that Android is able to grow so fast is that it's available on multiple devices from multiple carriers. AT&T only has so many customers willing to go for the iPhone, and there are only so many willing to switch carriers to get one.

Our reader poll from Monday suggests that half of current Ars iPhone users would defect to Verizon if the iPhone was launched on that network, and plenty of current Verizon customers would be interested in an iPhone if it were available. A CDMA iPhone might not mean much for worldwide growth, given that there is still plenty of growth opportunity with GSM-based carriers. However, if Apple wants to maintain US smartphone market dominance, going CDMA might be the only option for the next few years.

Tuesday, 30 March 2010

Solaris 10 no longer free as in beer, now a 90-day trial

Solaris 10, the official stable version of Sun's UNIX operating system, is no longer available to users at no cost. Oracle has adjusted the terms of the license, which now requires users to purchase a service contract in order to use the software.

Sun's policy was that anyone could use Solaris 10 for free without official support. Users could get a license entitling them to perpetual commercial use by filling out a simple survey and giving their e-mail address to Sun. Oracle is discontinuing this practice, and is repositioning the free version as a limited-duration trial.

"Your right to use Solaris acquired as a download is limited to a trial of 90 days, unless you acquire a service contract for the downloaded Software," the new license says.

It's important to understand that this change will not affect OpenSolaris, which is still freely available under the terms of Sun's open source Common Development and Distribution License (CDDL). Users who don't want to pay for service contracts will be able to use OpenSolaris instead. That might not be particularly comforting to some users, however, because there is some uncertainty about the future of OpenSolaris.

Although Oracle has committed to continuing OpenSolaris development, the company says that OpenSolaris might not get all of the new features that are being developed for the Solaris platform. Oracle says that it is reevaluating some aspects of the development process and isn't entirely sure how it will proceed.

It's not clear exactly how this will play out, but it seems that Oracle might choose not to open the source code of certain improvements in order to differentiate its commercial Solaris offerings. If Oracle decides to move in that direction, then OpenSolaris will no longer be able to serve as a drop-in free replacement for the latest commercial version of the operating system.

OpenSolaris contributor Ben Rockwood, a Sun Community Champion, discussed the licensing change in a recent blog entry. The licensing changes will be detrimental to smaller Solaris users, he says. He also expresses some concerns about the future of OpenSolaris due to the lack of communication from Oracle about the current status of the overdue OpenSolaris 2010.03 release.

"There may be attractive offerings for new customers in the high-end enterprise space, but long time supporters in smaller shops are going to get royally screwed," he wrote. "This might be a good time to catch up on non-Sun/Oracle distros such as Nexenta, Schillix, and Belenix."

Sun started giving away Solaris for free so that it could retain mindshare for the platform as Linux gained a broader presence. Oracle doesn't really need to do that because it has robust commercial offerings for both Linux and Solaris. Oracle can afford for Solaris to become the niche premium offering while Linux dominates most of the rest of the market. If dropping the free version gets some existing users to start paying for service contracts, then it's a win for Oracle. The downside is that the changes in licensing will exacerbate the growing rift between Oracle and the existing community of OpenSolaris enthusiasts.

Cable ISPs: new broadband test makes our service look slow

A new study charges that some of the Internet Service Provider speed test results that the Federal Communications Commission cites in its surveys are inaccurate. Specifically, tests conducted by the comScore market research group tend to lowball ISP performance because of calculation errors, says Netforecast, whose analysis was commissioned by the National Cable and Telecommunications Association.

comScore's various testing errors "result in an underreporting of the actual speed delivered by an ISP on its network, and the individual errors create a compounding effect when aggregated in an individual subscriber's speed measurement," Netforecast concludes. "The result is that the actual speed delivered by each ISP tested is higher than the comScore reported speed for each result of every test."

Not only that, but "other broadband user speed tests are also prone to the same data gathering errors," Netforecast warns.

Absolute indicators

comScore publishes market survey reports on online trends—everything from IP video use to music downloading. The outfit's surveys are constantly quoted by the big telcos, cable companies, and ISPs in their filings with the FCC. Comcast and NBC Universal, for example, repeatedly cite comScore stats in their brief asking the Commission to approve their proposed merger.

But it looks like big cable draws the line when it comes to comScore's assessment of ISP speeds—not surprising given that the NCTA has asked the agency to "continue to look at maximum advertised speed rather than some measure of 'actual' speed" in defining broadband. The Netforecast survey notes that the FCC has used comScore metrics "as an absolute indicator of specific ISPs' performance," but doesn't say in which report. The Commission most famously mentions them, however, in Chapter Three of its National Broadband Plan.

Citing comScore data, the 370+ page document concludes that the average advertised speed for broadband has gone up to the tune of 20% every year over the last decade. "However, the actual experienced speeds for both downloads and uploads are materially lower than the advertised speeds," the NBP adds. "The actual download speed experienced on broadband connections in American households is approximately 40-50% of the advertised 'up to' speed to which they subscribe. The same data suggest that for upload speeds, actual performance is approximately 45% of the 'up to' advertised speed (closer to 0.5 Mbps)."

Netforecast pushes back on all this, charging that comScore's testing assumptions are wrong. Specifically, they "overstate the disparity" between "median actual and maximum advertised speeds." Here's a thumbnail of Netforecast's analysis of comScore's methodology:

Severe limits

According to Netforecast, comScore client software applications, operated at home by consumer recruits called "panelists," run speed tests with the goal of reaching a test server every eighteen hours. If a broadband-level speed is detected, the client downloads a file ranging in size from 1MB to 15MB. The test then crunches the results via a formula that multiplies the file size by 8 (for byte-to-bit conversion), then divides by the test time minus the minimum startup latency.

So, for example—according to Netforecast's representation of comScore's formula—a 15MB file taking 3.5 seconds to download with a minimum startup latency of 0.5 seconds works out as follows: (8 * 15,000,000) / (3.5 - 0.5) = 40,000,000 bits per second, or 40.0Mbps.
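
To make the arithmetic concrete, here's a minimal Python sketch of the calculation as Netforecast describes it; the function and its names are our own illustration, not comScore's actual code. The second call previews the megabyte-definition complaint in the list below.

    # Recomputes Netforecast's example of comScore's throughput formula.
    # Illustrative only; the structure and names are ours, not comScore's.
    def measured_speed_mbps(file_bytes, total_seconds, startup_delay_seconds):
        """Bits transferred divided by (test time minus startup delay),
        expressed in megabits per second."""
        bits = 8 * file_bytes
        return bits / (total_seconds - startup_delay_seconds) / 1_000_000

    # Treating 1MB as 1,000,000 bytes reproduces the 40.0Mbps figure:
    print(measured_speed_mbps(15 * 1_000_000, 3.5, 0.5))   # 40.0
    # Treating 1MB as 1,048,576 bytes yields the ~41.9Mbps Netforecast prefers:
    print(measured_speed_mbps(15 * 1_048_576, 3.5, 0.5))   # ~41.94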

Netforecast identifies six problems with comScore's testing:

* In its calculations, comScore treats a megabyte as 1,000,000 bytes; Netforecast argues it should be 1,048,576 bytes (the binary definition). That introduces an error of about -4.5 percent in the example above—that is, the actual speed is 41.9Mbps, not 40.
* Only one TCP connection is used in each test. This "severely limits the accuracy of its results," the analysis contends. "Many speed test services operate multiple parallel TCP connections to more accurately and realistically measure ISP performance."
* Client-server factors leading to delay are not consistent in each trial. The system initiates a speed test from the comScore client to the server, which uses a reverse DNS lookup to determine the ISP network the client is on. It then determines the optimal server for the test. But: "The peering relationship with the panelist's ISP may be so complex that the test path introduces high delay," Netforecast warns. "Effective performance degrades when delay increases."
* The panelist's computer may have other software running during the test. "In fact, comScore recruits panelists by providing them software such as screen savers that operate when the panelist is not actively using the network," the critique contends. "The other software can reduce the computing resources available for the speed test."
* The test traffic may conflict with home traffic. A home Wi-Fi network could add complexity to the results, as could other PCs or machines connected to the network, or neighboring networks and cordless phones.
* The tests place subscribers in speed tiers higher than the ones they actually purchased, which makes measured performance look worse relative to the advertised rate.

And so, complains Netforecast, the effective service speeds comScore reports are based on flawed tests, while the advertised speeds it measures them against are often wrong as well.

"It is essential that ISP speed tests be thoroughly understood and that their results are truly representative and accurate," the Netforecast analysis concludes. "The industry should define standardized and transparent targeted methodologies for ISP speed testing and foster their widespread adoption."

We contacted both comScore and the FCC for a comment about the report, but have yet to receive a reply.

Lawmakers want Google to Buzz off over privacy concerns

Google's Buzz social networking service, which launched as part of Gmail in February, was met with considerable controversy. The service automatically transformed users' e-mail address books into public Buzz contact lists, creating the potential for sensitive information to be exposed without user consent.

The Electronic Privacy Information Center (EPIC) and the Electronic Frontier Foundation (EFF) condemned Google's mismanagement of the service's rollout and lack of privacy safeguards. EPIC filed a complaint with the FTC, calling for the agency to review the matter. A bipartisan group of congressmen is the latest to join the chorus. In an open letter addressed to FTC chairman Jon Leibowitz, eleven members of the US House of Representatives called for an investigation of Buzz and closer scrutiny of Google's pending acquisition of mobile advertising company AdMob.

"We are writing to express our concern over claims that Google's 'Google Buzz' social networking tool breaches online consumer privacy and trust. Due to the high number of individuals whose online privacy is affected by tools like this—either directly or indirectly—we feel that these claims warrant the Commission's review of Google's public disclosure of personal information of consumers through Google Buzz," the letter says. "We hope to be of assistance to you in finding constructive solutions to fill in the gaps that leave our online privacy vulnerable to unsolicited intrusion."

The letter specifically highlights the contact list disclosure issue, but also raises questions about the implications of Google's advertising practices. The letter asks the FTC to determine if Google's acquisition of AdMob—and the resulting reduction in mobile advertising competition—will reduce incentives for the company to protect consumer privacy.

Google took swift action to correct Buzz's privacy problems shortly after the controversy erupted. The automatic contact-following behavior was replaced with a system that recommends people to follow. The service's underlying functionality was also made more transparent and the mechanisms for disabling the service were improved.

Although these changes have been broadly lauded as a step in the right direction, critics believe that Google needs to go further and make the service itself an opt-in offering. The forceful rollout of the service, and Google's move to inject it into Gmail as an unsolicited addition, are cited by EPIC and other privacy advocates as a serious breach of user trust.

This view is shared by outgoing FTC Commissioner Pamela Jones Harbour, who criticized Google in a recent panel about Internet privacy. In Harbour's opinion, Google's "irresponsible" launch of Buzz is representative of the broader privacy and security issues that society faces with the emergence of cloud computing. She fears that questionable privacy practices will escalate if steps aren't taken now on behalf of consumers. "Consumer privacy cannot be run in beta," she reportedly said.

As Google expands its reach into more corners of daily life, the company will face more stringent scrutiny. The Buzz privacy blunder, and the concerns that it has raised, have clearly penetrated the awareness of lawmakers and the policy community.

Monday, 29 March 2010

Yahoo wants two-faced DNS to aid IPv6 deployment

Many systems that purport to have connectivity to the IPv6 Internet, well, don't. According to measurements done by Google 18 months ago, about a third of a percent of all Web users' systems think they have IPv6, with huge regional differences. In reality, it doesn't work for 27 percent of those users. Last week at the IETF meeting in Anaheim, engineers from Yahoo proposed to solve this problem by only exposing a server's IPv6 addresses if a DNS query comes in over IPv6.

Today, the 0.09 percent of Web users with broken IPv6 suffer significant timeouts if they, for instance, aim their Web browser at an IPv6-enabled site. The browser will first try to connect over IPv6 for upwards of a minute before giving up and retrying over IPv4. This is a big problem for important Web destinations such as Google and Yahoo, because they don't want to lose 0.09 percent (or more, as IPv6 use increases) of their visitors and therefore, revenue.
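
To see where those timeouts come from, consider roughly what a dual-stack client does when a hostname has both IPv6 and IPv4 addresses. The sketch below is a simplified illustration (the timeout value is hypothetical, and real browsers are more sophisticated): it tries each address in the order the resolver returns them, so a black-holed IPv6 path costs a full connection timeout before the IPv4 attempt even starts.

    import socket

    def connect_with_fallback(host, port, timeout=20.0):
        """Try each resolved address in order (IPv6 typically sorts first on
        dual-stack systems); broken IPv6 burns the whole timeout first."""
        last_error = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)   # hangs here if IPv6 resolves but doesn't route
                return sock
            except OSError as err:
                last_error = err
                sock.close()
        raise last_error or OSError("no addresses returned for %s" % host)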

Google has "solved" this problem with its Google over IPv6 program which requires DNS server operators to get whitelisted. Users of whitelisted DNS servers subsequently receive google.com's and youtube.com's IPv6 addresses as well as the usual IPv4 addresses when they perform a DNS query for the addresses that go with those DNS names. Everyone else gets only the IPv4 addresses. Apparently, Google, Netflix, and Microsoft have been exploring the possibilities of a shared, industry-wide IPv6 whitelist.

Yahoo, however, is taking a different approach. If a user is performing DNS queries over IPv6, then obviously his or her IPv6 connectivity works, so exposing IPv6 addresses to users who send DNS queries over IPv6 should be fairly risk-free. Doing so means implementing "two-faced DNS": a DNS server that gives different answers to different clients asking the same question. Everyone agrees that this solution, like the whitelist, is rather ugly, and such a practice isn't particularly DNSSEC-friendly. (That can be addressed by also giving DNSSEC-enabled users the IPv6 information.)
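
The proposal is so far just slides, but the core decision is easy to sketch. Everything below (the zone data, record names, and function) is a hypothetical illustration of the idea, not Yahoo's implementation; the point is simply that the answer to an AAAA query depends on whether the query itself arrived over IPv6.

    # Hypothetical zone data: the same name has both an A and an AAAA record.
    ZONE = {
        "www.example.com": {
            "A":    ["192.0.2.10"],
            "AAAA": ["2001:db8::10"],
        },
    }

    def answer(name, qtype, client_ip):
        """Two-faced resolution sketch: return AAAA records only when the
        query arrived over IPv6 (i.e., the client's own address is IPv6)."""
        query_came_over_ipv6 = ":" in client_ip   # crude check, fine for a sketch
        if qtype == "AAAA" and not query_came_over_ipv6:
            return []   # IPv4-only clients never learn the IPv6 address
        return ZONE.get(name, {}).get(qtype, [])

    print(answer("www.example.com", "AAAA", "198.51.100.7"))   # [] over IPv4
    print(answer("www.example.com", "AAAA", "2001:db8::53"))   # ['2001:db8::10']

A real authoritative server would make the same check inside its query handler, and, per the caveat above, could also hand the AAAA records to clients whose queries signal DNSSEC support.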

There are two problems with Yahoo's approach. First of all, mechanisms for computers to learn the IPv6 addresses of nameservers are lacking. Unlike IPv4, IPv6 often doesn't use DHCP (many systems, such as Windows XP and Mac OS X, don't even support DHCPv6). One alternative mechanism for learning IPv6 DNS server addresses, RFC 5006, is even less widely deployed. So most systems that have both IPv4 and IPv6 connectivity perform their DNS requests over IPv4.

The other issue is that there is at least one other server between a Yahoo user's computer and Yahoo's DNS servers. If that server is operated by people who are oblivious to IPv6, it's unlikely that they will configure it such that it only gives out Yahoo's IPv6 addresses to users who send queries over IPv6. So the whole thing hinges on the cooperation of those network operators who are breaking IPv6 connectivity in the first place.

If this is the only way that content networks such as Yahoo and Google are prepared to become IPv6-capable, it's still better than nothing. And perhaps this downside will be addressed when the Yahoo engineers work out the details of this proposal, which is so far just a set of presentation slides.

In the meantime, it would be nice if network operators stopped arbitrarily blocking IPv6 packets encapsulated inside IPv4 packets, which disables "IPv6 tunnels," and if people who enable IPv6 made sure it keeps working after the initial excitement of running the new protocol wears off.

Appeals court strikes down another generic biotech patent

Last week, the full US Court of Appeals for the Federal Circuit upheld an earlier ruling by a three-judge panel, invalidating a biotech patent that originated in research at MIT and Harvard. The patent covered any of three ways to disable a signaling pathway involved in the immune response, and would have enabled its licensee, Ariad Pharmaceuticals, to go after companies that already have drugs on the market. The court held, however, that simply specifying different ways of interfering with a protein, without any written description of how to do so, constituted insufficient grounds for granting a patent.

This case and a similar one (University of Rochester v. Pharmacia) that served as precedent both followed the same pattern. In each case, basic research in a university context identified a key protein involved in inflammatory responses. For Rochester, it was the enzyme Cox-2; drugs that inhibit it included Celebrex and Vioxx, both painkillers with a lower risk of stomach irritation than aspirin. In the new case, it was the NF-κB signaling pathway, which is involved in the immune response to pathogens. Excessive activation of NF-κB creates chronic inflammation. In this case, Eli Lilly had two drugs already on the market.

In each case, the patent that was granted contained a generic description of how to inhibit the protein involved. For the Ariad patent, three methods were mentioned: blocking the signaling pathways that activate NF-κB, reducing the protein's activity, and preventing the protein from binding DNA. Neither patent specified the actual biochemical mechanism for performing any of these inhibitory functions, nor did they describe any substances that could actually do so. In short, they covered any possible method of targeting a specific biochemical pathway.

As soon as the patents were granted, the grantees turned around and sued the pharmaceutical companies that had actually done the hard work of finding an implementation of these generic concepts.

The first suit, which was decided in 2003, didn't go well for the patent holders, as a judge ruled that the patent, as granted, didn't include a sufficient description of an actual invention. The new case resulted in a jury trial, which Lilly lost, leading its lawyers to request that the verdict be vacated as a matter of law. The presiding judge declined, leading to the appeals process. An initial ruling by a three-judge panel overturned the Ariad patent on the grounds of an insufficient written description, but the full court decided that the issue was significant enough to merit a rehearing.

The relevant passage of the US patent code was quoted in full in the decision:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

To a certain extent, the argument is over where to put the commas in the statute, which determines what, precisely, needs to appear in the written description. Based on its reading, the Court found "nothing in the statute's language or grammar that unambiguously dictates that the adequacy of the 'written description of the invention' must be determined solely by whether that description identifies the invention so as to enable one of skill in the art to make and use it." So, even though a skilled biochemist could view the rough descriptions in the patent and know how to develop an inhibitor, that alone is not enough to make the patent valid.

With that interpretation of the law stated, the court focused on whether the patent defined anything that could reasonably be described as an invention. "Every patent must describe an invention," the decision states. "It is part of the quid pro quo of a patent; one describes an invention, and, if the law’s other requirements are met, one obtains a patent."

Its conclusion is that this sort of description is nowhere to be found. Using biological terminology, the Court concluded that the patent describes what it terms a "genus" of inventions, but neglects to specify any "species"—meaning actual chemicals—that are sufficient to show that the patent holder can actually claim to have implemented the genus. Instead, using language from an earlier decision, it dismisses this approach as "no more than a 'wish' or a 'plan'." Just as the court decided in the Rochester case, without a description of an actual chemical that performs these functions, a patent cannot be considered to have provided an adequate written description.

Although it simply reaffirms the Rochester decision, the new opinion attempts to explicitly set a standard for written descriptions that applies generally. And the standard it sets is a rather significant one. Based on the last several decades of biochemistry and molecular biology, it's really easy to identify proteins that help regulate a variety of essential processes, and to suggest a variety of routes to inhibiting them. Actually developing something that does so successfully is where the hard work and creativity—the inventiveness, as it were—comes in. In short, this interpretation of the statute appears to bring patent law more in line with the general intent of the patent system.

The decision is also a good sign that the Appeals Court (or at least its clerks) has come to grips with modern biology. The decision includes a decent description of the NF-κB signaling process, and casually discusses testimony regarding whether the protein has a transcription activation domain that's distinct from its DNA binding domain. It's not a stretch to expect that a court that finds the biology mysterious will be more likely to grant deference to technically complex but flawed patents, to the detriment of both research and society. The fact that the court is comfortable discussing biochemistry suggests that this risk is receding.

Sunday, 28 March 2010

AVG Rescue CD Cleans Your Infected Windows PC

There's no shortage of great antivirus tools to help protect your PC from viruses, but what about when you encounter an already-infected PC? Your best bet is a boot CD, and the free AVG Rescue CD makes the cleanup easy.

The AVG Rescue CD comes in two flavors: an ISO image that can be burned to an optical disc, or a compressed version that can be installed to a bootable flash drive. Once you've prepared one, you can boot from your drive of choice directly to the AVG menu, where you can scan for viruses, edit files, test your drive, or even edit the registry. Since the bootable CD is based on a version of Linux, you also get access to a number of common Linux tools for making changes to your system and, hopefully, making it bootable again.

The AVG Rescue CD is a free download for anybody, cleans viruses from Windows or even Linux PCs, and is a great addition to your PC repair toolkit. If you need some help setting up the bootable USB flash version, check out the Guiding Tech tutorial for the full walk-through.



ZeuApp Downloads 82 Awesome Open Source Apps

Windows: If you're setting up a new system, or want to show a friend just how much great free and open source software is out there, ZeuAPP is a portable tool that downloads and installs 82 applications.

ZeuAPP is essentially an installation dashboard for 82 applications. You can navigate to application types like CD Burners, P2P apps, Office apps, and more. Under each tab are the applications for that category, each with a "Download" and a "Visit Website" button: the former grabs the application and automatically launches its installer, the latter opens the app's site for more info.

ZeuAPP is freeware, portable, and Windows only. Looking for something that'll also quickly grab and install your favorite non-open source apps? Check out previously mentioned Ninite.



Controlling Mars rovers: there's an app for that

What if, instead of pocket-dialing, you could pocket-send-a-Mars-rover-over-a-cliff? That was the goal of two programmers at EclipseCon 2010 (via Slashdot). A competition at the conference asked developers to either create an e4-Rover client or use one to move a demo robot over a model Mars landscape. Two participants, Peter Friese and Heiko Behrens, built the robot-controlling client into an iPhone application.

Entrants could win the rover challenge at EclipseCon in one of two ways. The goal of the first competition was to create the most "attractive, usable, and effective" robotic command-and-control system based on e4, as judged by a panel. The second competition involved using a client to maneuver the provided robot over the model landscape, earning points for completed tasks, with the highest score winning.

Friese and Behrens built an iPhone application to control the robot using the phone's accelerometer, tilting it around to guide the rover in various directions. They won neither of the two categories, which awarded prizes including a tour of a NASA robotics lab, Lego robotics sets, and credits for Amazon Web Services. Until we all get personal Mars rovers, the realistic implications of the app are small; however, these developers certainly have a jump start on a Mars rover game.
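
We haven't seen the pair's source code, and the real client talked to the contest's e4-Rover server, but the tilt-to-steering idea itself is simple enough to sketch in a platform-neutral way. The function, thresholds, and command names below are all hypothetical, offered only to illustrate the general technique.

    def tilt_to_command(pitch, roll, dead_zone=0.15):
        """Map normalized accelerometer tilt (-1.0 to 1.0 on each axis) to a
        drive command; small tilts inside the dead zone are ignored so a
        shaky hand doesn't send the rover over a cliff."""
        if abs(pitch) < dead_zone and abs(roll) < dead_zone:
            return "stop"
        if abs(pitch) >= abs(roll):
            return "forward" if pitch < 0 else "reverse"
        return "right" if roll > 0 else "left"

    print(tilt_to_command(pitch=-0.6, roll=0.1))   # tilting forward -> "forward"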