Saturday 27 March 2010

GameStop sued over one-time use codes, deceptive advertising

Used game sales are not popular with publishers, which see profits only when new copies of games are sold. Increasingly, games now ship with one-time use codes that give customers extra content or features; those buying the game used have to pay a separate fee to access that content. GameStop is facing legal action over this practice for one simple reason: the game box advertises the content, but the stores don't disclose that it costs extra for those who buy used.

A class-action lawsuit has been filed against the retailer in California, alleging that the game boxes state the content is included when the code has most likely been used or is missing entirely from the second-hand packaging. GameStop's return policy allows returns within seven days, which the suit claims is not enough time to protect consumers.

"The availability of this additional content is prominently advertised on the packaging of these games," the suit states. "Despite the representations on the packaging that the game comes with a free use code, unbeknownst to consumers who purchase a used copy of one of these games, upon attempting to download the content identified on the game's packaging, consumers are unable to do so unless they pay an additional fee.

"In short, as a result of GameStop's deceptive and misleading practices, consumers who purchase used games from GameStop unknowingly find that they must pay an additional fee to access the full game they though they purchased."

The suit even includes scans of the back of certain game boxes, such as Dragon Age: Origins, which states that it includes a downloadable character and quest, a $15 value. In smaller print it clarifies that the content is delivered via a one-time use code, available with "full retail purchase." On the back of Gears of War 2: Game of the Year Edition, the text reads "Includes 19 extra maps and an additional campaign chapter." In tiny print under that, it states "Download card included."

Is this deceptive?

The text simply says that the codes are included with retail purchase, and in this case, they're not going to be. GameStop doesn't sticker these games with warnings about missing content, and store staff are under pressure to push preowned product over new, since the profit margins on used games are much higher than on new copies. It's unlikely store clerks are going to stick their necks out and explain that the total price, once the extra content is purchased separately, will be higher than it would have been to simply buy the game new.

With most of this content costing $10 to $15 to download if you don't have the single-use code, and GameStop's practice of selling used games for approximately $5 less than new, these games suddenly become a very bad deal for consumers. Without education on the nature of this content in the store, misunderstandings over what is actually included in the box may be more common than we think.

In class-action suits the biggest winners are usually the lawyers, but if GameStop is forced to be more upfront about what is included in these game boxes, future consumers will be able to make smarter purchasing decisions. As the practice of using add-on content as a reward for buying a game new becomes more prevalent, this issue will only become more important.

Almost half of poor Americans go to the library for Internet

There's more data coming in on the extent to which low income Americans depend on public institutions for broadband. A new report released by the Bill and Melinda Gates Foundation says that 44 percent of those living below the poverty level access e-mail and the Web via their local public library. And nearly a third of Americans over 14 used library Internet services in 2009. That's about 77 million people.

The study was based on almost 50,000 telephone and Web form surveys. It also found that:
  • Forty percent of those 2009 users accessed library Internet resources to find employment. Seventy-five percent of these looked for a job online. Half posted their resume or filled out an online job application.
  • Another 37 percent researched some illness or medical problem, or searched for or made an appointment with a doctor.
  • Forty-two percent used their local library's Internet for education; over a third of these did their homework online. A big portion of these users were teenagers.
  • Sixty percent accessed a library computer to contact someone else.
The study is further confirmation (if more is needed) that low income Americans know that broadband is now an absolute necessity in this economy. It's also more evidence of the huge pressure on libraries to meet this demand. About a third of libraries say they lack both the 'Net connections and staff power to provide the services for which low income patrons ask.

Big telecom Qwest wants broadband stimulus bucks

The dominant carrier of the western and southwestern United States has announced that it will apply for a $350 million broadband stimulus grant from the Department of Agriculture's Broadband Initiatives Program (BIP). Qwest says it wants to build infrastructure to link over half a million households, hospitals, businesses, and schools that currently have no access to broadband. We're talking download speeds of 12 to 40 Mbps, according to the telco's press statement.

"Much like the water and electric programs the government established to encourage rural development, federal grants are needed to enable the deployment of broadband to high-cost, unserved areas," Qwest Vice President Steve Davis says in the release.

The news triggered a wry commentary from Karl Bode over at DSL Reports, who notes Qwest's history of trying to block smaller providers' access to utility poles, opposing Seattle's fiber development projects, and missing BIP's first stimulus application round, all the while dressing itself up for a possible sale.

"It seems a little counterproductive to give Qwest taxpayer money for network builds they were unwilling to do," Bode notes. "Consider this again: we'd be giving taxpayer money to a company that spent millions of dollars fighting towns and cities from using taxpayer money to wire themselves when Qwest wouldn't."

Qwest covers most of the western US, save Nevada and California. The company's statement doesn't explain where exactly the telco plans to roll out these broadband lines, and the application isn't up on BIP's application database yet. But whatever the carrier is planning on doing, it'd better hurry up. BIP says the deadline for infrastructure project applications is this Monday, March 29, at 5 PM EDT.

Harvard profs trash ACTA, demand oversight, threaten lawsuit

Harvard Law School professors Lawrence Lessig and Jack Goldsmith took to the op-ed page of the Washington Post today to slam the Obama administration's approach to the Anti-Counterfeiting Trade Agreement (ACTA)—and to threaten a lawsuit if ACTA is signed without Congressional oversight.

The US has positioned ACTA as an executive agreement rather than a treaty. Such a move means that ACTA doesn't need Senate approval, but it also means that the agreement should not alter US law, either. If you want to change the law, you go to Congress.

Lessig and Goldsmith argue that ACTA, at least in its current leaked form, does involve "ideas and principles not reflected in US law."

Example number one is a pretty poor choice, in our view; the professors say that "ACTA could, for example, pressure Internet service providers—such as Comcast and Verizon—to kick users offline when they (or their children) have been accused of repeated copyright infringement because of content uploaded to sites such as YouTube."

As we've noted before, though, the language here comes from the Digital Millennium Copyright Act—already US law. The leaked drafts show that ISPs need policies in place to deter repeat infringement; a footnote suggests that "three strikes" Internet disconnections might be one appropriate way to do this, but would not be required.

The more fundamental complaint is that the president simply doesn't have the power to negotiate executive agreements on IP law and communications policy.

"The administration has suggested that a sole executive agreement in this instance would not trample Congress's prerogatives because the pact would not affect US domestic law," write the professors. "Binding the United States to international obligations of this sort without congressional approval would raise serious constitutional questions even if domestic law were not affected."

They recommend that Congress stand up for its rights and insist on being consulted—something that (now Vice President) Joe Biden did to the Bush administration when he was a senator. If ACTA is signed without such oversight, Lessig and Goldsmith say it will "be challenged in court."

Friday 26 March 2010

Google reportedly shares mobile ad revenue with key partners

Google's Android mobile operating system is gaining considerable traction in the smartphone market. According to a report from mocoNews, the secret behind Android's success is a series of revenue-sharing agreements that Google has signed with carriers and even some handset manufacturers.

The specific details are somewhat hazy, but the report cites unnamed sources who say that Google is giving its partners a cut of the advertising revenue generated by its mobile users. These deals are said to be isolated to the companies that are shipping Google Experience devices—handsets that come preinstalled with Google's branded mobile applications.

Mobile advertising is still at a relatively nascent stage of its evolution. A New York Times article published earlier says that mobile ads in 2009 generated less than one-third of one percent of total ad revenue. Despite the current lack of compelling demand for mobile advertising, there appears to be a whole lot of potential for growth, which is why Google and Apple are both making big investments. Google's $750 million acquisition of mobile advertising firm AdMob is pending, and Apple reportedly paid $275 million for Quattro, one of AdMob's competitors.

The availability of Android's source code and the absence of licensing fees helps to make Android adoption an easy choice for carriers and handset makers, but the revenue sharing is possibly how Google seals the deal and attracts loyalty.

Cloudy with a chance of Linux: Canonical aims to cash in

Although Ubuntu is generally regarded as a desktop Linux distribution, the server variant is becoming increasingly popular in the cloud. It is silently infiltrating server rooms and gaining traction in enterprise environments. A recent survey published by Canonical provides some insight into adoption trends of Ubuntu on production servers.

As Ubuntu's presence in the server space grows, it is showing up in some unexpected places. Weta Digital, the New Zealand company that did the special effects for Lord of the Rings and some of the 3D rendering for Avatar, reportedly runs Ubuntu on its 35,000-core render farm and virtually all of its desktop computers. The Wikimedia Foundation, the organization behind the popular Wikipedia website, rolled out Ubuntu on 400 of its servers in 2008. We even use Ubuntu ourselves on several of the key servers that power the Ars Orbiting HQ.

One of the factors that is potentially driving Ubuntu adoption on servers is the pricing model. Canonical makes Ubuntu updates available for free, but also offers commercial support as a separate service. As companies that deploy Linux cultivate better in-house support capabilities, they want commercial Linux support options that are more granular and less expensive.

Some of the major Linux companies like Red Hat have been slow to respond to that demand, creating an opportunity for alternatives like Ubuntu and CentOS to gain traction. We looked at this phenomenon in 2008 when Ubuntu was beginning its ascent on servers.

It's worth noting, however, that the Linux server market is big enough to accommodate a plurality of players with very different models. Canonical is still a small fry and won't be displacing the reigning incumbents in the immediate future. Even though companies that want to decouple support costs from updates are starting to shy away from Red Hat, it's pretty clear from Red Hat's latest impressive quarterly earnings that there is no shortage of companies for whom its current offerings are still highly desirable.

One way in which Canonical is aiming to differentiate its server offerings is by emphasizing Ubuntu's support for the cloud. In the Ubuntu server survey, Canonical cites statistics from Cloud Market which show that Ubuntu is the most popular platform on Amazon's Elastic Compute Cloud (EC2), representing over 30 percent of all EC2 platform images.

Cloud Market also has a timeline that shows the usage rates of various platforms on EC2 since May 2009. The chart shows a trend of massive and rapid growth for Ubuntu starting in October 2009. This correlates with the release of Ubuntu 9.10, the first version of Ubuntu for which official Amazon EC2 images were made available.

Although EC2 is relatively popular, there are some workloads for which it is not a particularly good value and there are a lot of companies that are unwilling to entrust sensitive data to a third-party provider. When consulting firm McKinsey & Company tackled these issues last year, they pointed out that companies for whom EC2 is a poor fit can still improve cost-efficiency by using virtualization technologies to boost server utilization in in-house data centers.

One potential solution is Eucalyptus, an open source framework that allows companies to create their own self-hosted elastic compute clusters. Eucalyptus is built to be interoperable with EC2 APIs, which means that it is largely compatible with tools that are designed to work with Amazon's cloud. The researchers behind Eucalyptus launched a company last year with the aim of commercializing the technology. They recently brought in Marten Mickos, former head of MySQL, to be the new CEO.
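
As a rough illustration of what that compatibility means in practice—ours, not Canonical's or Eucalyptus'—the sketch below points boto, the Python library commonly used to script Amazon EC2, at a self-hosted Eucalyptus front end instead. The hostname, credentials, and the port and path (Eucalyptus' customary defaults) are placeholders.

```python
# Minimal sketch: driving a Eucalyptus cloud through boto's EC2 bindings.
# Endpoint, credentials, port, and path below are placeholder assumptions.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="uec-frontend.example.com")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,
    region=region,
    port=8773,                    # Eucalyptus' default Web services port
    path="/services/Eucalyptus",  # Eucalyptus' default EC2-style API path
)

# The same calls that work against Amazon's cloud work here, e.g.:
for image in conn.get_all_images():
    print(image.id)
```

Because the API surface matches, tools written for EC2 can often be repointed at a private UEC installation with little more than a change of endpoint and keys.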

Eucalyptus is said to be a compelling option for companies that want to have elastic computing capabilities in their own data centers. Canonical uses the Eucalyptus technology as the foundation of its Ubuntu Enterprise Cloud (UEC) package, which allows companies to roll out a private Ubuntu-based cloud. Canonical sells a number of services on top of UEC, including consulting, training, support, and management tools.

Canonical is building a roster of partners to boost the strength of UEC. In an announcement earlier this week, Canonical revealed that Dell will offer UEC software and technical support with several enterprise hardware packages. The bundle will be based on the upcoming Ubuntu 10.04, a long-term support release.

"Behind the scenes we've worked with Dell's DCS team for over six months to test and validate the integration of the [Ubuntu] cloud stack on their new PowerEdge-C series," wrote Canonical global alliances director Mark Murphy in a statement. "This is the first major offering of a true open source Cloud solution backed by a major corporate vendor."

The cloud is playing an increasingly central role in Canonical's evolving business strategy. The company's commitment to UEC and Ubuntu's popularity on EC2 are both clearly growing. At the same time, Canonical is attempting to monetize the desktop with its integrated Ubuntu One cloud service. As Canonical climbs towards profitability, the cloud-centric strategy could give it a lift.

Moving beyond silicon to break the megahertz barrier

We're rapidly closing in on a decade since the first desktop processors cleared the 3GHz mark, but in a stunning break from earlier progress, the clock speed of the top processors has stayed roughly in the same neighborhood since. Meanwhile, the feature shrinks that have at least added additional processing cores to the hardware are edging up to the limits of photolithography technology. With that as a backdrop, today's issue of Science contains a series of perspectives that consider the question of whether it's time to move beyond semiconductors and, if so, what we might move to.

The basic problem, as presented by IBM Research's Thomas Theis and Paul Solomon, is that scaling the frequency up has required scaling the switching voltage down as transistors shrink. Once that voltage gets sufficiently small, the narrowing difference between on and off states causes problems through some combination of two factors: the off state leaks (leading to heat and power-use problems), or the device switches slowly, meaning lower clock speeds. Faced with a "choose any two" among speed, size, and power, we've been doing pretty well via chipmakers' focus on the latter two, but that's now gotten physicists and materials scientists thinking it might be time to look elsewhere.
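
To put one standard piece of device physics behind that tradeoff (this is textbook background, not something spelled out in the perspectives themselves): the sharpness with which a conventional transistor turns off is thermally limited. The subthreshold swing—the gate voltage needed to change the channel current by a factor of ten—cannot be less than

$$S = \frac{kT}{q}\,\ln 10 \approx 60\ \mathrm{mV/decade} \quad (\text{at room temperature, } T \approx 300\ \mathrm{K})$$

so every tenfold cut in off-state leakage costs at least roughly 60mV of swing. Shrink the supply voltage and something has to give: either the off state leaks more, or the device is driven less hard and switches more slowly—exactly the bind described above, and the one the steeper-switching devices discussed below are meant to escape.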

With four individual perspectives loaded with technical information, it's not realistically possible to dive into the details of each, so what follows is a top-down overview of some of the arguments that are advanced by the various authors.

Forget clockspeed entirely

It's not that the authors of this perspective think continued progress in the sort of electronics that appear in laptops is unimportant; they just suggest it will be increasingly less interesting as we focus on small, flexible systems that can be put in portable devices like smartphones, integrated into things (like clothing) that don't currently contain electronics, and ultimately find their way into implantable medical devices. For all of these applications, flexing and stretching are more important than raw speed.

We've covered a variety of approaches to getting bits to bend, and the perspective breaks approaches down into two basic categories: either make the electronics flexible, or make them small, and connect them with flexible material. In the former category, the obvious choice would be some sort of organic transistor, but the authors suggest that the need for this is overstated. If standard silicon is fashioned into a silicon ribbon, it's actually remarkably robust when flexed. The trick is to embed the ribbon in a stable, flexible substrate, as well as accepting that the device will never have the same power as a complex, multi-layer chip of the sort that we use today.

The alternative is to make the electronics rigid, but extremely small and simple, so that they don't occupy much space. These mini-chips can then be embedded in a flexible material without changing its bulk properties. All that's left is connecting them up and providing them with power, but a number of materials—metals, silicon, and a carbon-nanotube derivative called "buckypaste"—can provide flexible and bendable wiring. Both approaches are already working in the lab, and the primary challenges tend to involve integrating materials that have very different properties in terms of hydrophobicity, heat dissipation, etc.

More bang for your volt

The IBM duo mentioned above reason that, if the problem is that we can't switch existing gates well with small voltage changes, it's time to find a switch that will amplify the impact of a small voltage change. So they consider two approaches that allow a voltage change to have nonlinear effects. The first is something called an "interband tunnel FET." In the on state, electrons have easy access to a valence band they can tunnel into. A small change in voltage, however, makes this valence band inaccessible, creating a sharp, leakage-proof off state. The problem with this approach is that, right now, we can make these devices with carbon nanotubes, but not silicon.

The alternative is to build some sort of gain device into the circuitry that amplifies a small input voltage. A sandwich of ferroelectric and dielectric layers will apparently allow the ferroelectric layer to switch its bulk behavior between two polarization states, giving a small voltage input an all-or-nothing impact. Adding these devices would obviously increase the size of a gate but, at the moment, the real problem is switching speed: theoretically, these things could switch in less than a picosecond, but actual implementations are taking 70 to 90ps.

Forget silicon entirely

The remaining two perspectives focus on the promise of transition metal oxides. The unusual electronic properties of these materials were made famous via high temperature superconductivity, but it's a very diverse group of materials with a huge range of properties. Bonds between oxygen and metals like titanium and lanthanum are extremely ionic in nature, which brings the large, electron-rich d-orbitals of the metals to the fore. Depending on the precise structure of the material and the additional metals present (Zn, Mg, and Sr appear common), the large collection of d-orbital electrons act as a bulk material.

And, just like any bulk material, the electrons can have phases, including solids, liquids, gases, superfluids, and liquid crystals; there are also property-based phases, like spin- and orbital-liquids. Where there are phases, there are phase transitions, which can be induced by electric and magnetic fields, among other factors. So, the potential is there for a small input to have a significant impact on a large collection of electrons. So far, the first demonstrations of this have come in the form of different types of RAM based on ferroelectric, magnetic, and resistive effects.

Things get even more interesting when the interfaces between different oxide layers are considered. We've covered one report in the past that described how the interface between two transition metal oxide insulators could allow superconductivity, and a variety of other interesting effects are described here. Some of these have already been demonstrated to switch states at features below 10nm; an atomic force microscope has created conducting lines at 2nm resolution in a different material.

A decade or more ago, the problem with these materials was having any control over their formation, but we've now gained the ability to deposit layers of the stuff with precisions of a single unit cell of the crystal. The roadblock now is theory; as one perspective puts it, the large numbers of electrons present create a many-body problem that we can't really solve. More generally, there are a lot of transition metals, and a lot of complex oxide combinations (LaAlO3-SrTiO3 and La2/3Ca1/3MnO3 are just two of the many combinations mentioned). Right now, theory simply hasn't reached the point where we can accurately model the effect of bringing these materials together, which makes designing anything with specific properties very hit-or-miss.

The overall message is that we're a long way from seeing anything resembling these ideas in a device, with the possible exception of bendable circuitry. For the moment, this hasn't been a crisis, as the fab-makers have managed to stretch out photolithography, and multicore processors are being put to reasonably good use. Still, the payoff from additional cores is likely to shrink fast, and it's nice to think that there may be something on the horizon that could restart a megahertz race.

Thursday 25 March 2010

IE8, Safari 4, Firefox 3, iPhone fall on day 1 of Pwn2Own

The first day of the annual Pwn2Own contest, in which security researchers can win cash and hardware if they successfully compromise machines using zero-day exploits, is finished. Internet Explorer 8 on Windows 7, Firefox 3 on Windows 7, Safari 4 on Mac OS X 10.6, and iPhone OS 3 were all compromised during the competition. Google's Chrome was the only browser left standing—and in fact, it was completely untested. None of the researchers at the competition even tried to attack Chrome.

So far, little is known about the successful exploits. Until vendors have been informed of the flaws and those flaws have been patched, details will not be made public.

The iPhone was not successfully hacked in 2009's competition, but it was predicted to fall this year, and those predictions have come true. A zero-day Safari flaw was used to gain access to text messages stored on the device by Vincenzo Iozzo from German security firm Zynamics and Ralf-Philipp Weinmann, a postdoctoral researcher at the University of Luxembourg. Notable in the exploit was that it bypassed both the iPhone's Data Execution Prevention and its requirement that all code be signed.

A little more is known about the IE8 exploit, including an (abridged) video of the browser being taken down. The successful researcher, Peter Vreugdenhil, has published a rough outline of the techniques used to bypass IE8's DEP and ASLR protections.

The Safari hack came from Charlie Miller; this makes three years in a row now that Miller has pwned—and hence owned—a Mac at pwn2own. Thus far, nothing further about either this exploit or the Firefox one appears to have been published.

Neither the iPhone exploit nor the IE8 exploit managed to escape the OS-supplied sandboxes that protect these platforms. Without escaping the sandboxes, the impact that flaws can have is reduced, preventing, for example, writing to hard disk (and hence, preventing installation of malware). Nonetheless, read-only access is still valuable for data theft.

It is this sandboxing that might explain why Google's Chrome was untouched; no researcher even attempted to attack it. It is certainly not the case that Chrome has no security flaws—a couple of days before the Pwn2Own draw was made to decide who got to attack which machine and in what order, Google published an update to Chrome that fixed a range of security flaws, some of which were deemed to be high-risk. Google's sandboxing is probably not impenetrable, but it is sufficient to make delivering the standard harmless exploit payload—starting up the Windows calculator—considerably harder.

As much as one percent of the Internet is now using IPv6

This week, the IETF is holding its 77th meeting in Anaheim, California. Last year around this time, the IETF met in San Francisco, and the Internet Society took advantage of that large gathering of Internet engineers to promote IPv6 and tell us that it's high time to trade in the dusty 1980s Internet Protocol for the shiny 1995 version. Tuesday, the news was that people are actually starting to heed the advice.

Geoff Huston of APNIC, the registry that gives out IP addresses in the Asia-Pacific region, looked at various numbers that could tell us how much traction IPv6 is gaining. One metric that's easy to observe is the global routing table. After all, if you want people to reach your IP addresses, you'll have to tell them what those addresses are so packets can be routed in the right direction. This is done with the BGP routing protocol.

Currently, there are more than 322,500 IPv4 address ranges announced in BGP, and 2,770 IPv6 address ranges. However, because of the need to conserve IPv4 addresses, ISPs get small IPv4 blocks and frequently have to come back for more. Meanwhile, they can get a single, huge IPv6 block all at once, making this an apples-to-oranges comparison.

The number of BGP-capable networks announcing each protocol is a more useful measure: 34,214 networks announce IPv4 prefixes and 2,090 announce IPv6 prefixes, meaning 6.1 percent of all networks have IPv6 enabled in their routers. If current growth trends continue, that figure is expected to reach 80 percent by 2017.

Transit networks—ISPs that in turn have ISPs or other BGP-capable networks as their customers—have relatively high IPv6 deployment: 22 percent.

Huston also looked at the ratio between the number of distinct IPv4 and IPv6 addresses seen by the Web servers of APNIC and its European counterpart RIPE. This ratio is now slightly over one percent. Caveats apply, however. Multiple IPv4 users may share an address through Network Address Translation while a single IPv6 user may be using different addresses over time due to address privacy mechanisms. Also, the Regional Internet Registries aren't exactly mainstream Web destinations; their visitors are very likely more IPv6-capable than average.

Comcast's Jason Livingood had some information to share about the IPv6 trial the cable giant recently initiated. More than 5,400 people volunteered, with some even changing ISPs to be able to do so. Comcast already has a number of systems running native IPv6: the backbone network, peering points (where traffic is exchanged with other ISPs and content networks), "converged regional area networks (CRANs)," DNS servers (authoritative and resolvers), DHCP servers, and the provisioning system. Next are the Cable Modem Termination Systems (CMTSs), CPEs (Customer Premise Equipment), and home gateways.

Comcast has its own IPv6 monitor, which shows that IPv6 reachability for the top 1 million websites is only about a sixth of a percent. Another IPv6 deployment monitor shows 2.4 percent of the Alexa Worldwide Top 500 Web Sites have IPv6 addresses in the DNS, and several country top 500 lists have percentages ranging from 0.8 (US and UK) to 2 (China).

Yet another metric is provided by the Amsterdam Internet Exchange. Its IPv6 traffic has been hovering around the 0.2 percent mark for the past year. This is traffic exchanged by some 200 large and small ISPs and content networks present in Amsterdam, reaching almost a terabit at peak times, with about 1.5 gigabits of IPv6 traffic. Actually, 0.2 percent is higher than expected: if one percent of all users have IPv6, and one percent of all servers have IPv6, that would make one percent of one percent of all traffic IPv6 traffic.

Some of this maddening inconsistency can be blamed on Google, which has two IPv6-capable destinations in the Web top 10: google.com itself and youtube.com, which gained IPv6 recently. However, most people with IPv6 connectivity will reach these over IPv4, because Google only discloses its IPv6 addresses to DNS servers that have been accepted into the Google over IPv6 program. Jason Livingood explains that Comcast saw a large spike in IPv6-reachable Web destinations "due partly to content owners adding Comcast DNS servers to their authoritative server whitelist."
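
If you want to see which side of that whitelist your own resolver is on, a quick check is to ask it for a Google hostname and look at the address families that come back. The short Python sketch below is our illustration, not anything from Comcast or Google; AAAA (IPv6) answers will only appear if the resolver you use has been admitted to the Google over IPv6 program and your machine has IPv6 enabled.

```python
# Ask the system resolver for every address it will hand out for a host.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("www.google.com", 80):
    kind = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(kind, sockaddr[0])
# IPv6 lines appear only when the resolver returns AAAA records--for Google
# in 2010, that meant the resolver was on the whitelist.
```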

Don't forget: we've used 3,026 million IPv4 addresses and have just 680 million to go, with 203 million used up in 2009. Time is running out.

Putting a computer science spin on genetic diagnostics

Collections of genetic profiles have continued to grow steadily, but scientists have struggled a bit with finding the most effective way to use them. In a paper published in PNAS this week, a group of researchers took one of the larger gene expression data repositories and sought to parse its disease-related data with a few computational techniques. They were able to use the resulting database in conjunction with a diagnostic program to accurately diagnose a given gene expression profile up to 95 percent of the time.

Gene expression data can be used to identify what differences in expression are likely to be connected to the presence of a certain disease. The formal association of a gene with a disease is known as an "annotation." However, getting the expression data and annotations into a usable form has been a challenge, and previous approaches have been limited to straightforward queries, asking the database to match a given profile or a phenotype. This approach leaves a lot of information untapped.

Scientists realized they could improve the usability of genetic databases by sorting their expression profiles into disease classes, and then querying the database with similar profiles. This would turn the databases into a predictive diagnostic tool—it would take gene expression profiles as input, find other matching profiles, and then check the matches for their disease annotations.

First, researchers standardized gene expression profiles by sorting them into a hierarchical system of disease classifications. They compared each diseased gene array result to a normal expression profile and took the logarithm of the ratio between them. This log-ratio gave researchers profiles to work with that were standardized across a collection of platforms and labs. They also evaluated the similarities between standardized profiles to identify correlations between gene combinations and diseases. Finally, they standardized the disease annotations associated with genes using the Unified Medical Language System.

Once their database of profiles was fully standardized, researchers created Bayesian classifiers for each disease grouping. Bayesian probability is based on evaluating the likelihood of one event given the probability of another, as well as the probability of a correct positive test. For example, if a blood sample tests positive for cancer, Bayesian probability states that the probability of that person actually having cancer depends on the accuracy of the test, the independent probability of someone getting cancer, and the independent probability of testing positive for cancer.
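
As a worked toy example of that reasoning—our numbers, purely illustrative, not drawn from the study—here is the cancer-test calculation expressed as a few lines of Python:

```python
# Bayes' theorem for the blood-test example: P(disease | positive test).
def posterior(prior, sensitivity, false_positive_rate):
    # Total probability of a positive test, over sick and healthy cases.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative inputs: 1% prevalence, a test that catches 90% of true cases
# and wrongly flags 5% of healthy samples.
print(posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05))
# ~0.15: even a decent test yields a modest posterior when the independent
# probability of the disease is low.
```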

Classifiers like these allow the program to evaluate an expression profile based on disease prevalence in similar profiles. Aside from the number of variables they account for, Bayesian systems are also able to "learn" and take new information into account, which is ideal for a genetic database where new samples are being added all the time.

With the classifiers in place, the diagnostic database was ready to use. When it was fed a query profile to figure out what diseases the person behind the profile might be prone to, the database would assess the profile's similarity to others it had on record and pull up the relevant Bayesian disease classes. The program could then read out the annotated disease concepts that correlated with the query profile.

Overall, the system had a diagnostic accuracy rate of 95 percent, with a precision of 82 percent. Researchers found the accuracy of the results was significantly improved when they applied a second Bayesian step for error correction. They also found that more datasets produced much more accurate results—for example, a test for a rare disease that only had three datasets associated with it in the system had a diagnostic precision of only 41 percent.

In addition to diagnosing diseases, the database was also fairly adept at finding relationships between diseases and drugs, provided that profiles contained information on the effects of medications. The system was able to recover many known drug side effects, and also suggested new disease-drug relationships. For example, they were able to construct a disease drug map that linked an anticancer drug, doxorubicin, to skin disorders (the drug has a side-effect of skin inflammation) and to cardiovascular disease (it has a cumulative toxic effect on the heart over time).

Some have expressed apprehension about the use of genetic diagnoses, in part because their predictions are somewhat unreliable. This program could potentially overcome those concerns by making diagnoses more robust and providing some quantification of the uncertainties. The authors note that the system's diagnostic accuracy and precision should continue to improve as more samples become available. Its creators also hope to integrate more phenotypes into the database, such as gene expression changes associated with stress responses and cell differentiation, possibly creating another map that could be overlaid on the genetic one to provide a different kind of predictive information.

Tuesday 23 March 2010

Nintendo's newest portable announced: the 3DS

With the Nintendo DSi XL landing in the offices of the gaming press this week, Nintendo saw fit to announce its newest product in its portable line: the Nintendo 3DS. The company gave limited details via a press release in Japan; we know the system will use two screens, won't require any sort of special glasses, and will be backwards compatible with current DS and DSi games.

The system will be released before the end of the fiscal year, which means the latest we'll see it in Japan is next March. The system is expected to make an appearance at this year's E3, and we'll surely be given more information before then. For now, Nintendo has yet to release any images of the system or of how its games will look and play.

So how will the 3D effect be displayed? We posted a video of a downloadable game that's out now in Japan that uses head tracking to simulate a 3D image, and since then we've had time to try the game on a friend's Japanese DSi during GDC. By tracking the motion of the system in relation to your eyes, you seem to be able to peer "into" the picture by turning the system this way and that. It's a surprisingly effective effect, and some iteration of this system may be used in the 3DS.

Nintendo has a history of announcing hardware upgrades and features that may seem silly at first glance before going on to become huge successes. Many scoffed at the idea of the Nintendo Wii, until lines to play the system at its first E3 showing stretched around the convention center. 3D is fresh in the minds of consumers after the success of Avatar, and 3D-capable televisions are expected to make a splash at retail this year. A portable system that works with all your old games and won't require glasses? It could be the right product at the right time.

Browser ballot already hurting Internet Explorer market share

The first few weeks of the browser ballot, Microsoft's solution to put an end to the EU antitrust case, have already resulted in Redmond's browser losing market share to its rivals, according to Web stats firm StatCounter.

In France, IE usage has dropped by 2.5 percent, in Italy by 1.3 percent, and in the UK by 1 percent. Browser developers Opera and Mozilla have reported strong growth within Europe, with Opera claiming that downloads have doubled since the ballot was introduced, and a Mozilla spokesperson saying, "We have seen significant growth in the number of new Firefox users as a result of the Ballot Choice screen." As the ballot is rolled out across the rest of Europe, Mozilla expects further gains to be made.

The ballot isn't universally popular. Although 12 browsers are offered, only the top five are immediately accessible. The remaining seven are only visible after scrolling horizontally. As the seven minority browsers expected, their presence in the ballot has done little to boost their market share. A spokesman for the Flock browser said, "To date, new downloads of Flock originating from the browser choice screen have only contributed marginally to growth in overall downloads. This is also the case for the other browsers not on the main screen."

The remaining browsers have petitioned the EU to try to get the ballot changed. For its part, Microsoft still maintains that the browser ballot is compliant with the EU's demands. With some 200 million European users due to be shown the choice screen, and the benefits of being included becoming increasingly clear, time is clearly of the essence for the seven smaller browsers.

Multicore requires OS rewrites? Well, maybe

A Microsoft kernel engineer, Dave Probert, gave a presentation last week outlining his thoughts on how the Windows kernel should evolve to meet the needs of the multicore future ahead of us. Probert complained that current operating systems fail to capitalize on the capabilities of multicore processors and leave users waiting. "Why should you ever, with all this parallel hardware, ever be waiting for your computer?" he asked.

Probert said that a future OS should not look like Windows or Linux currently do. In particular, he targeted the way current OSes share processor cores between multiple applications. He suggested that in multicore OSes, cores would instead be dedicated to particular processes, with the OS acting more as a hypervisor: assigning processes to cores, but then leaving them alone to do their thing. It might then be possible to abandon current abstractions like protected memory—abstractions that are necessary in large part due to the sharing of processor resources between multiple programs and the operating system itself.

The reason for this major change is, apparently, that it will improve the responsiveness of the system. Current OSes don't know which task is the most important, and though there are priority levels within the OS, these are generally imprecise, and they depend on programs setting priorities correctly in the first place. The new approach would purportedly improve responsiveness and provide greater flexibility, and would allow CPUs to "become CPUs again."

Probert is an engineer for Microsoft, working on future generations of the Windows kernel. He acknowledged that other engineers at Microsoft did not necessarily agree with his views.

At least, this is what has been claimed; there's one original report from IDG, and that's about the extent of it. The presentation was made at the Microsoft- and Intel-sponsored Universal Parallel Computing Research Center at the University of Illinois at Urbana-Champaign, and the slides unfortunately appear to be available only to university attendees and sponsors. Either the report is missing some key point from the presentation that explains the ideas, or it's just not that surprising that Probert's Microsoft colleagues don't agree with him since, well, the suggestion just doesn't make a whole lot of sense.

The big reason that you might have to "wait for your computer" is that your computer hasn't finished doing what you've asked of it. It's still loading a document, or rendering a Web page, or computing your spreadsheet, or something else. Dedicating cores to specific processes isn't going to change that—the problem is not task-switching overhead (which is negligible, and far, far quicker than human reactions can detect) or the overhead of protected memory. The problem is much simpler: programs are slow to react because the tasks they're doing take a finite amount of time, and if sufficient time has not elapsed, the task will not be complete!

It's true that some programs do bad things like failing to respond to user input while they're performing lengthy processing, but that's bad coding, and dedicating cores to processes isn't going to do a thing to prevent it. That problem needs to be fixed by developers themselves. The broader problem—splitting up those tasks so that they can be computed on multiple processors simultaneously, and hence get faster and faster as more cores are available—remains a tough nut to crack, and indeed is one of the problems that the Parallel Computing Research Center is working on.

Most peculiar is the alleged claim that this model has "more flexibility" than current models. Current systems can already dedicate processor cores to a task, by having the OS assign a task to the core and then letting it run uninterrupted. But they can also multiplex multiple processes onto a single core to enable running more processes than one has cores (we call this "multitasking").
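
In fact, pinning a process to a core is a one-liner on current systems. The sketch below is a Linux-specific illustration of ours (it relies on os.sched_setaffinity, available in Python 3.3 and later), not anything from Probert's presentation:

```python
# Inspect and restrict the set of cores the scheduler may use for this process.
import os

pid = os.getpid()
print("may run on cores:", sorted(os.sched_getaffinity(pid)))

# "Dedicate" the process to core 0; the scheduler will stop migrating it.
os.sched_setaffinity(pid, {0})
print("now restricted to:", sorted(os.sched_getaffinity(pid)))
```

Both behaviors—any core, multiplexed with everything else, or a dedicated core on request—are available today, which is why the "more flexibility" claim is hard to credit.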

This isn't to say that operating systems won't undergo changes as more cores become more common. Windows 7, for example, included a raft of changes to improve the scaling of certain system components when used on large multi-core systems. But none of these changes required throwing out everything we have now.

It's possible that we might yet have to do just that in order to get useful scaling if and when CPUs routinely ship with dozens or hundreds of cores. But unless something's missing from the explanation, it's hard to see just how a massive single-tasking system is the solution to any of our problems.

Monday 22 March 2010

Panasonic is winning the first round of the 3DTV wars

3D TVs dominated the show floor at CES 2010, and I spent a good deal of time trying out all of the models and approaches on offer from vendors large and small. My conclusion was that Panasonic's plasma-based approach was noticeably superior to the competition, so I wasn't surprised to learn that consumers seem to agree.

Bloomberg reports that Panasonic 3D TVs have already sold out at Best Buy, despite having launched as recently as March 10. In fact, there's reportedly a shortage of the TVs, and they're on backorder.

I honestly didn't expect consumers to snap up even the best 3D TVs, since I didn't find the experience to be compelling enough to pay a premium for. I'll admit that Avatar in 3D changed my mind, and I found myself thinking that when it comes time for me to replace my current plasma TV in a few years, I'll definitely pick up a 3D-capable model.

It looks like the 3D TV revolution will definitely happen, but unless the makers of LED-backlit LCD models can find a way to boost the quality of their 3D experience, only one company stands to benefit from the change so far.

Sunday 21 March 2010

Ubuntu 10.04 beta 1 is looking good, less brown

Canonical has announced the availability of the first Ubuntu 10.04 beta release. The new version of Ubuntu, codenamed Lucid Lynx, is scheduled to arrive in April. It will be a long-term support (LTS) release, which means that updates will be available for three years on the desktop and five years on servers.

Although the Ubuntu developers have largely focused on boosting stability for this release, they have also added a number of noteworthy new features and applications. One of the most visible changes is the introduction of a new theme—a change that is part of a broader rebranding initiative that aims to update Ubuntu's visual identity.

Canonical's Ayatana team has continued its effort to overhaul the panel. Ubuntu 10.04 introduces a new application indicator system that will streamline the panel notification area. The panel has also gained a new menu—referred to as the Me Menu—for managing instant messaging presence and posting short messages to social networking Web sites. The social networking functionality is powered by Gwibber, my open source microblogging application, which was added to Ubuntu for version 10.04. Another application that's new in Lucid is Pitivi, a simple video editing tool. In a controversial move, the Ubuntu developers have decided to remove the GIMP, the popular image editing program.

The new theme has benefited from further refinement since its initial inclusion. Some of the more garish elements, like the strong hash marks on the scrollbars that we saw in the original version, have been smoothed out and made more subtle. Several bugs have also been addressed, such as the problems we previously encountered with OpenOffice.org menu highlighting. The Ubuntu Software Center has also gained an improved look that matches the new Ubuntu branding.

The beta is not quite ready for use in production environments, but it's already fairly robust and ready for widespread testing. You can download it from the Ubuntu Web site. If you would like to test it in a virtualized environment without having to change your current Ubuntu installation, you might want to try the TestDrive tool. For more details about 10.04 beta 1, check out the official release notes.

IE9, standards, and why Acid3 isn't the priority

Microsoft's development direction for Internet Explorer 9 is unambiguous: implementing HTML5 Web standards is the name of the game, with the intent of letting developers write the "same markup" and have it work everywhere. As IE General Manager Dean Hachamovitch said at MIX10 this week, "We love HTML5 so much we actually want it to work."

Redmond is targeting real-world applications based on real-world data. For example, every single JavaScript and DOM API used by the top 7,000 websites was recorded. IE9 will deliver support for every API used by those sites.

That obviously gives rise to a chicken-and-egg situation—what about the APIs that developers can't currently use because of a lack of widespread support, but would like to? Beyond the data from the top 7,000 sites, Microsoft has a number of HTML5 usage scenarios that it's targeting. The company has not said much about what those scenarios are, but given the demonstrations of HTML5 video and SVG animation, these are clearly viewed as core technologies for a future HTML5-powered Web.

This dedication to HTML5 does not, however, mean that Microsoft is going to devote considerable effort to, for example, the SunSpider benchmarks or the Acid3 test. As the browser develops, the scores in those tests will likely improve (it currently gets 55/100, a marked improvement on IE8's 20/100), but they're not the number one priority. Acid3 is a scattergun test. It's not systematic—you can implement a high proportion of a particular specification and not pass the test, or a much lower proportion but still pass—and though many of the features it tests are useful, that's probably not the case for everything, and it's certainly not testing the one hundred most useful HTML5 features or anything like that.

More fundamentally, there are different degrees of "supporting a standard." Some demonstrations of the highly desirable and widely demanded CSS rounded borders helped explain this. The IE9 Platform Preview and WebKit both purport to support CSS3's rounded borders, and the Gecko engine (in Firefox) has an extension to provide rounded borders (the extension is nonstandard, but implemented in such a way as to not interfere with standard features). Rounded borders are something that developers are particularly keen on, since without CSS support, they have to be approximated with images, which is much less flexible (you can't easily change the colour or thickness of a border if it's done with images, for example). So in terms of desirability, they rank pretty high.

Unfortunately, they don't look consistent. At all:

[Screenshots omitted: the same rounded-border markup as rendered by the IE9 Platform Preview and by WebKit.]

These are two browsers that both support a feature. But they look completely different. This has two interpretations: either one or both of the browsers is wrong, or the specification is lousy (such that both browsers are doing what the specification says, even though it's surely not what any developer would want). In general, this kind of discrepancy isn't something that a test like Acid3 will reveal. It needs systematic, thorough suites of tests that verify each individual part of the specification, and ensure that the different parts of the specifications work together.

In developing these tests, sometimes errors in the specification will be revealed. But it's also likely that errors in implementations will be revealed, even implementations that are widely perceived to "support" feature X or Y. Acid3 can't show just how much of the HTML5 standards a browser supports. It can't even tell you very much about which parts of the standards aren't supported. To do these things requires much more thorough testing.

It's for this reason that Microsoft is continuing the work it did for Internet Explorer 8. With IE8, Microsoft developed, and delivered to W3C, a huge library of CSS 2.1 tests. Systematic testing was the only way to ensure that the browser truly lived up to the demands of the specifications. So for IE9, the company is developing a new raft of tests, the first batch of which have already been submitted to W3C. Microsoft doesn't want IE9 to have the same kind of test results as other browsers presently do; a feature isn't done until all the tests pass.

A case could be made that these other browsers are perceived as more compliant than they really are; while there are certain browsers that excel in certain areas (Opera's SVG support has long been extensive), other browsers also have considerable gaps. Sure, not as big as the gaps that IE8 presently has, but substantial nonetheless. All vendors clearly have plenty of work to do before the "same markup" goal really becomes reality.

Scoring well on the SunSpider JavaScript benchmark is similarly not an explicit target for IE9. SunSpider is useful, and tests JavaScript performance in many ways, but just as real web pages aren't written like the Acid3 test, real web applications aren't written like SunSpider. Real applications do things like optimize their design so that the basic page loads quickly, and then complex activities happen asynchronously in the background. SunSpider doesn't really test this style of development, and yet this is how real applications actually work.

This doesn't make SunSpider bad, but it explains why it's not a priority. It would be a mistake to optimize specifically for SunSpider, as SunSpider is not representative of real-world usage. Developers should optimize for reality, not for specific microbenchmarks.

Microsoft wants its HTML5 support to be stable and robust. This means that Internet Explorer 9 is unlikely to support every single part of the various specifications that make up HTML5; some parts are presently too much of a moving target to be viable. Other parts may be stable, but not relevant to the scenarios that the company is using to guide its development effort. But what the company will deliver will be thorough in a way that isn't necessarily the case with other browsers. Clearly the company has an uphill struggle if its browser is to be perceived as highly conformant with Web standards. But it's certainly heading in the right direction.

12-core Mac Pros, 27" Cinema Display may be coming soon

Apple has been very busy on the mobile front, with the iPad launching in two weeks and iPhone OS and hardware upgrades expected this summer. However, Apple hasn't forgotten about its Mac business—sources for AppleInsider report that long overdue updates to Apple's Cinema Display and Mac Pro will also appear by June.

Expected to join the 24" LED Cinema Display that Apple launched in October of 2008 is a 27" LED Cinema Display based on the same panel currently used in the 27" iMac. Issues with the panels caused problems for Apple that resulted in shipping delays for the 27" iMac, though those problems have since been rectified. The 27" LED Cinema Display has the same horizontal resolution as the current 30" Cinema Display, though it is 16:9 instead of 16:10. Its introduction should finally lay to rest the 30" model, which hasn't been updated in three years.

Apple is also said to be wrapping up an update to its Mac Pro workstation towers, which have only gotten a slight speed bump since they were introduced well over a year ago. Apple has been waiting for Intel to release new 32nm Xeon parts, codenamed "Westmere-EP," which were officially launched this week. These 5600-series Xeons have six cores, compared to the quad-core parts used in current Mac Pros. The process shrink from 45nm offers a 60 percent performance boost while maintaining the same power requirements as previous Xeons.

A Core i7-980X Extreme Edition processor, codenamed Gulftown, may be used in the lower-end single processor Mac Pro model. However, there are slight architecture differences between the Core i7 and Xeon variants. Apple may simply offer a single Xeon option as it does now.

Apple is also dealing with the fact that MacBook Pros have not been updated in some time, despite the fact that mobile Core i3, i5, and i7 parts have been available since January. The delay may be due, at least in part, to licensing issues. These issues have prevented NVIDIA from building integrated controllers, like the 9400M used in all of Apple's current portables, for Intel's newer processors. However, NVIDIA's Optimus platform may provide a way to work around the problem and maintain the MacBook Pro's seven-hour battery life.

Additional delays may also be caused by constrained supply of Intel's mobile processors. Intel is reportedly giving priority to "major clients," according to sources for DigiTimes, so our hope is that Intel counts Apple in that category.

Apple CEO Steve Jobs promised a number of exciting product introductions this year at the most recent quarterly earnings call. The coming months might give us a virtual cornucopia of new Macs to choose from.