Friday 9 April 2010

Early IE9 Platform Preview results show promise

We've argued that Microsoft needs to engage more with Web developers: give them a better understanding of what the company is doing with its Web browser, let them provide feedback throughout development, and more broadly involve them in the process. With Internet Explorer 9's Platform Preview, Microsoft has indeed taken steps to do exactly this. Though Microsoft still isn't releasing the nightly builds that other browsers offer, the Platform Preview definitely represents progress; it provides early access to IE9's core rendering and JavaScript engines, and will be updated approximately every eight weeks.

The eight-week cycle was chosen because Redmond felt this provided the best trade-off between getting regular updates into developers' hands, ensuring that the preview releases are reasonably robust, and getting useful feedback that integrates well with Microsoft's own development processes. Each version will undergo reasonably extensive testing during the eight-week period, giving ample opportunity for bugs to be filed. For its part, the IE team has committed to investigating every single bug filed, and resolving all that it can.

Indications are that the Platform Preview has thus far been quite successful. Sources close to the matter state that some 700,000 copies of the preview have been downloaded, and that interest has been global in spite of the preview being a US English-only release. The top three bug report areas are SVG, compatibility, and then CSS, which certainly indicates that developers are taking an interest in testing the browser's new features and ensuring they work correctly. Of the hundreds of bugs filed thus far, the same source says that around 60 percent of them have been addressed by the development team.

These download numbers are substantial, and they make a good case for this slower release strategy. The Mozilla group's weekly Status Meetings include information about the number of users of different prerelease versions of Firefox. During the lead-up to the release of Firefox 3.5, several hundred thousand people used the major betas, but the nightly builds saw far fewer users, perhaps 10,000-15,000. The Platform Preview is nowhere near as usable as the Firefox betas, so 700,000 downloads is a strong showing.

Ars talked briefly with IE General Manager Dean Hachamovitch about the Preview. He said that IE9 testers seemed particularly interested in the browser's performance and graphical capabilities, as those areas of the IE9 Test Drive site had received the most traffic. This is also reflected in some third-party reactions. NVIDIA recently used the IE9 Platform Preview to promote the graphical capabilities of its new Ion2 platform—capabilities that IE9 can, of course, exploit due to its extensive hardware acceleration (the same hardware acceleration that leaves Windows XP users out in the cold).

Stability vs. automation

Microsoft's approach to browser development shows a strong commitment to the ideal of a stable, versioned platform, something that developers can reliably target for consistent results. That attitude carries through to the Platform Preview. Unlike, say, Chrome's dev channel, which updates automatically about once a week so that developers always have an up-to-date version, the IE9 Platform Preview will require manual updating. This has the obvious downside that developers might forget, or simply not notice, that a new version is out, and so may end up testing obsolete builds; but it preserves the notion of predictability that Microsoft considers so important. With Chrome, a page might work one day and be broken the next by an automatic update, a behavior that can be confusing at best. Microsoft doesn't want IE developers to face a similar experience; instead, they will have to take deliberate action to update.

On the other hand, automatic updates are, in a sense, part-and-parcel of the Web experience. Websites can change their appearance overnight, and while this can be confusing to some, it's an unavoidable fact of Internet life. The case can certainly be made that browsers should follow websites' lead and update themselves; this gives users a browser that (by and large) keeps on getting better—faster, more capable, and with new features. Though occasional regressions will happen (wherein a new version breaks something that used to work) a robust development process should make these rare.

Hachamovitch acknowledged these advantages, especially for more savvy, technically aware users (that is, users who are unlikely to be fazed by new features and improvements appearing in their browser automatically). But the company still prefers the more conservative approach, to avoid surprising users with unexpected changes.

Hachamovitch was noncommittal about what the second Platform Preview release, due next month, would contain. At MIX10 we saw demonstrations of HTML5 video in a build of IE9; the version released last month, however, didn't include video support. HTML5 video is surely one of the most eagerly anticipated IE9 features, and it will ship in a Preview release eventually, but we don't know when.

The Platform Preview program is still in its early stages, and this is the first time that Microsoft has developed its browser in this way; there may yet be refinements to the program as both the company and third parties learn the best way to work together, and there's no news yet of what will happen once IE9 moves into beta.

Thus far, at least, it looks like the scheme has been successful at getting third-party developers involved with IE9's development, which has to be good news for both sides.

Microsoft offers much-needed fix for Windows OSS development

Although Microsoft is beginning to acknowledge that the rich ecosystem of open source software can bring a lot of value to Windows users, the most popular open source software projects are largely developed on other platforms, which means that they aren't always easy to deploy on Windows. A relatively complex open source server stack can be rolled out on Linux with a few clicks, but it might take hours to get the same software installed and properly configured on Windows.

Microsoft developer Garrett Serack has identified a compelling solution to this problem. He is launching a new project to build a package management system for Windows with the aim of radically simplifying installation of popular open source software on Microsoft's platform. He calls it the Common Open Source Application Publishing Platform (CoApp).

Much like the package management systems that are a standard part of popular Linux distributions, the CoApp project will provide a delivery platform for packaged open source software libraries and applications, with support for dependency resolution and automatic updates. It could be a powerful tool for system administrators who want a WAMP stack or developers who want to port Linux applications to Windows.
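To make "dependency resolution" concrete, here's a minimal, hypothetical sketch in Python of the bookkeeping involved: given a package and the things it depends on, compute an install order in which prerequisites always come first. The package names and dependency graph are invented for illustration; real CoApp packages would be MSI files carrying version constraints and far richer metadata.

```python
# Hypothetical sketch of dependency resolution for a CoApp-style package manager.
# Package names and the dependency graph are made up; real packages would be MSI
# files with version constraints and richer metadata.

def resolve_install_order(packages, target):
    """Return an install order for `target` with dependencies listed first."""
    order = []
    visiting = set()

    def visit(name):
        if name in order:
            return                          # already scheduled for install
        if name in visiting:
            raise ValueError("dependency cycle at " + name)
        visiting.add(name)
        for dep in packages.get(name, []):
            visit(dep)                      # schedule prerequisites first
        visiting.discard(name)
        order.append(name)

    visit(target)
    return order

# A made-up slice of a WAMP-style stack.
deps = {
    "php": ["openssl", "libxml2"],
    "libxml2": ["zlib"],
    "openssl": ["zlib"],
    "zlib": [],
}
print(resolve_install_order(deps, "php"))   # ['zlib', 'openssl', 'libxml2', 'php']
```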

Serack wants to use Microsoft's MSI format for the packages and intends to take advantage of WinSxS in order to deliver parallel binaries so that users will have access to multiple builds of the same library generated by different compilers. The project will also seek to establish some basic standards for filesystem layout so that files are put in consistent places.

He is coordinating the project with Microsoft's blessing, but the actual development effort will be community-driven—an approach that will hopefully enable CoApp to evolve in a way that best serves its users rather than being directed by Microsoft.

"The folks here at Microsoft have recognized the value in this project—and have kindly offered to let me work on it full-time. I'm running the project; Microsoft is supporting my efforts in this 100%," he wrote in a blog entry about the project on Wednesday. "The design is entirely the work of myself and the CoApp community, I don't have to vet it with anyone inside the company."

Making open source development on Windows suck less

Having ported several of my own Linux applications to Windows, I know from personal experience how insanely difficult it can be to set up a proper environment for developing open source software on Microsoft's operating system. For the last Qt application that I ported, the process of getting all of the dependencies installed took hours. I had to install CMake, find just the right version of Automoc, and compile OpenSSL from source.

My current Windows VM has half a dozen different build systems and three separate sets of version control tools, all of which had to be installed individually. I also have two completely separate installations of MinGW and a rather heavy Cygwin setup. I need all of this crap in order to port my software to Windows, but it's a nightmare to maintain. I have to meticulously document every step of the setup process if I ever want to be able to do it again on a different Windows installation.

These headaches are enough to deter many open source software developers who would otherwise be releasing Windows versions of their applications. Spending a few hours developing on Windows often serves as a painful reminder of how much I depend on my distro's super cow powers. That is why I'm convinced that CoApp is a very good idea.

Cygwin arguably constitutes a package management system by itself, but it tends to be somewhat insular and isn't very native. Serack believes that CoApp offers an opportunity to do it the right way and close the gaps that isolate ported open source software components from the rest of the Windows ecosystem. If it's done properly, that could be very significant.

Although Linux enthusiasts tend to disdain Windows, porting Linux applications to Microsoft's operating system can open up a lot of opportunities. A Windows port can expose your application to a whole new audience, making it possible to attract new contributors. We have seen a number of prominent open source software projects benefit in that manner from Windows compatibility in the past.

A positive side effect of that phenomenon is that it introduces Windows application developers to open source frameworks and technologies. Broader adoption of cross-platform Linux-friendly software and toolkits on Windows would obviously help boost the availability of software for Linux.

Although I'm really impressed with Serack's vision, I'm a bit skeptical that a task of such magnitude and complexity can be fulfilled to an extent that would truly deliver on its potential. Such an undertaking will require considerable manpower. Ars readers who want to participate in the project or learn more can check out the CoApp page on Launchpad.

Thursday 8 April 2010

Inside WebKit2: less waiting, less crashing

Anders Carlsson, an Apple employee, announced today on the WebKit mailing list an evolution of the WebKit project called WebKit2.

WebKit2's major aims are to bake both a "split process model" and a non-blocking API into the WebKit product—and by extension into Safari and any other client which takes advantage of the WebKit2 framework.

Google's Chrome browser launched in late 2008 with what's called a split process model, one in which each WebKit view runs in its own process. This means that when a bad plugin or a bug in the renderer causes an error, only that tab will crash instead of the entire browser.

IE8 has a similar system and Firefox is exploring one as well with Electrolysis. Apple's Safari 4 gained a related feature under Mac OS X 10.6, which runs plugins like Adobe's Flash in a separate process and prevents the whole browser from crashing due to plugin faults. WebKit2 validates that approach by building support for split processes directly into the rendering engine.

Out of my way!

Another goal of WebKit2 is to implement the API to the framework in a completely non-blocking manner. This means that developers can hook into API methods with various kinds of callbacks in order to receive notifications from WebKit views.

For example, in my application I might want to load a webpage. I would call the loadWebsite method (which I just made up), pass it my URL, and additionally specify a callback method or block of code I would like to have attached to a particular event—say didFinishLoadForFrame (which I did not make up).

Whenever the webpage finished loading, my callback would be executed or my method called. This should result in much more responsive applications that hook into WebKit2: theoretically, while the renderer is busy rendering, the main application loop can move on to whatever else the user has requested.
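A toy sketch makes the pattern clearer. The code below is purely illustrative Python, not the actual WebKit2 API; load_website stands in for the made-up loadWebsite call above, and the event name mirrors didFinishLoadForFrame.

```python
import threading
import time

# Illustrative sketch of a non-blocking "load and notify me later" API, in the
# spirit of the WebKit2 design described above. All names are hypothetical; the
# real framework is not a Python library.

class FakeWebView:
    def __init__(self):
        self._callbacks = {}

    def on(self, event, callback):
        """Register a callback for an event such as 'did_finish_load_for_frame'."""
        self._callbacks.setdefault(event, []).append(callback)

    def load_website(self, url):
        """Return immediately; do the 'rendering' work on a background thread."""
        def work():
            time.sleep(0.5)                       # stand-in for network and layout work
            for cb in self._callbacks.get("did_finish_load_for_frame", []):
                cb(url)
        threading.Thread(target=work, daemon=True).start()

view = FakeWebView()
view.on("did_finish_load_for_frame", lambda url: print("finished loading", url))
view.load_website("https://example.com")
print("main loop keeps running while the page loads")
time.sleep(1)                                     # keep the demo alive long enough to see the callback
```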

There are three techniques currently implemented to achieve this goal: notification-style client callbacks, policy-style client callbacks, and policy settings. A fourth method, code injection which can interact directly with the DOM, is proposed but not yet implemented. These are described in more detail on the project page.

The neat thing about Apple's implementation is that these abilities can be used by all downstream clients of WebKit2, since they are baked right into the framework—in contrast to the tack taken in Google Chrome. The end user of an application that uses WebKit2 gets a more stable product, and developers can take advantage of these enhancements without having to implement their own solutions or jump through unnecessary hoops.

WebKit2 is currently supported on Windows and OS X—the two platforms on which Apple deploys Safari. Linux support is not mentioned at this time.

Wednesday 7 April 2010

IBM breaks OSS patent promise, targets mainframe emulator

IBM is threatening to pursue legal action against TurboHercules, a company that sells services relating to the open source Hercules project, an emulator that allows conventional computers with mainstream operating systems to run software that is designed for IBM System Z mainframe hardware.

In a letter that IBM mainframe CTO Mark Anzani recently sent to TurboHercules, Big Blue says that it has "substantial concerns" that the Hercules project infringes on its patents. The letter is a brusque half-page, but was sent with nine additional pages that list a "non-exhaustive" selection of patents that IBM believes are infringed by the open source emulator.

This move earned the scorn of well-known free software advocate and patent reform activist Florian Mueller. In a blog entry posted Tuesday, Mueller fiercely criticized IBM, accusing the company of abusing its patent portfolio and harming open source software in order to retain monopolistic control over its expensive mainframe offerings.

"After years of pretending to be a friend of Free and Open Source Software (FOSS), IBM now shows its true colors. IBM breaks the number one taboo of the FOSS community and shamelessly uses its patents against a well-respected FOSS project," wrote Mueller. "This proves that IBM's love for free and open source software ends where its business interests begin."

He contends that IBM's support for open source software is insincere. As evidence of the company's hypocrisy, Mueller points out that two of the patents that IBM listed in its letter to TurboHercules are included in the list of 500 patents that IBM promised not to assert against open source software in 2005. Mueller is convinced that the patent promise was a manipulative attempt to placate government regulators.

How emulation intersects with IBM's mainframe business

IBM's position in the mainframe market has posed contentious antitrust issues for years. The company's software licensing model ties its mainframe operating system to its underlying System Z hardware, guaranteeing that companies who have built their own applications for the platform can't easily migrate to other hardware options. This lock-in strategy has proved lucrative for IBM, generating billions of dollars in revenue.

Despite the extremely high cost and the fact that some companies don't necessarily derive value from the hardware's unique characteristics, they continue buying IBM's mainframe solutions because doing so remains cheaper than rewriting all of their legacy applications. We explained this phenomenon several years ago when we looked at the reasons why IBM's mainframe business is still profitable despite the declining relevance of the technology.

A well-designed System Z emulator that allows users to migrate their own mainframe applications to commodity hardware would obviously pose a serious threat to IBM's mainframe business, but IBM's software licensing terms have historically prevented such a threat from materializing. Users would have to run IBM's mainframe operating system inside of the Hercules emulator in order to run the applications—but they aren't allowed to do that.

It's certainly possible to run modern versions of IBM's mainframe operating system with Hercules, but it can't really be officially supported or publicly condoned by the project's developers due to the licensing issues. Much like Hackintoshing, it is fairly trivial on a technical level but constitutes an unambiguous violation of the end-user license agreement. As such, Hercules never really posed a threat to IBM in the past. The legal issues simply preclude commercial adoption and deployment in production environments. Hercules is principally used by enthusiasts to run z/Architecture Linux variants, an activity that doesn't erode IBM's lock-in strategy.

In many ways, the project arguably benefits IBM by encouraging interest in the mainframe platform. That is largely why IBM has shown no hostility towards Hercules in the past. In fact, IBM's own researchers and System Z specialists have lavished Hercules with praise over the years after using it themselves in various contexts. The project was even featured at one time in an IBM Redbook. What brought about IBM's change in perspective was an unexpected effort by the TurboHercules company to commercialize the project in some unusual ways.

TurboHercules came up with a bizarre method to circumvent the licensing restrictions and monetize the emulator. IBM allows customers to transfer the operating system license to another machine in the event that their mainframe suffers an outage. Depending on how you choose to interpret that part of the license, it could make it legally permissible to use IBM's mainframe operating system with Hercules in some cases.

Exploiting that loophole in the license, TurboHercules promotes the Hercules emulator as a "disaster recovery" solution that allows mainframe users to continue running their mainframe software on regular PC hardware when their mainframe is inoperable or experiencing technical problems. This has apparently opened up a market for commercial Hercules support with a modest number of potential customers, such as government entities that are required to have redundant failover systems for emergencies, but can't afford to buy a whole additional mainframe.

IBM's response

As you can probably imagine, IBM was not at all happy with that development. Following IBM's initial threats of legal action, TurboHercules retaliated by filing an antitrust complaint in the European Union, calling for regulators to unbundle IBM's mainframe operating system from its mainframe hardware. IBM responded harshly last month, claiming that the antitrust complaint is unfounded and that a software emulation business is just like selling cheap knock-offs of brand-name clothing. The conflict escalated, leading to the patent letter that was published on Tuesday.

When faced with patent litigation, companies often try to keep the conflict quiet and hope for an out-of-court settlement because they don't want the threat of a lawsuit to scare away potential customers. That's certainly a factor here, because IBM's threats raise serious questions about whether it is truly permissible to use the Hercules emulator in a commercial setting. TurboHercules is taking a bit of a risk by disclosing the letter to Mueller for broad publication. The startup likely chose to publicize their predicament with the hope that the open source software community will notice and respond by shaming IBM into backing down.

As Mueller points out in his blog entry, the broader open source software community has some reasons to side with TurboHercules in this dispute: some of the patents cited by IBM cover fundamental functionality of virtualization and emulation. Those patents reach far beyond the scope of Hercules and could pose a threat to other open source software projects.

Windstream in windstorm over ISP's search redirects

Responding to a medium-sized uproar, Windstream Communications says it is sorry about redirecting Firefox users' search queries away from Google and to its own search engine, and the Little Rock, Arkansas-based ISP says it now has the situation under control.

"Windstream implemented a network change on Friday, April 2 that affected certain customer Web browser search box queries, producing search results inconsistent with Windstream's prior practices," a spokesperson for the voice/DSL service told us. "Windstream successfully implemented configuration changes today to restore original functionality to these search queries after hearing from affected customers."

The question, of course, is whether the company accidentally or deliberately rigged its network software to produce those "inconsistent" results. We asked, but not surprisingly didn't get an answer to that query.

Not the behavior I expect

As Ars readers know, there's money to be made from the typing errors of Web users. Input a slight misspelling of a popular domain name and you'll wind up at an ad-saturated site designed to harvest all such instances. Then there are the Internet service providers that take this business one step further. Screw up a domain by a single character and you wind up at an ISP-sponsored or -partnered search engine, complete with ads on the site waiting for your impression or click.
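A quick way to check whether your own ISP's resolver plays this game is to look up a hostname that almost certainly doesn't exist: an honest resolver returns an error, while a redirecting one hands back the address of its own search page. Here's a rough, illustrative probe in Python (a single lookup is hardly conclusive, and the test hostname is just randomly generated):

```python
import socket
import uuid

# Rough probe for NXDOMAIN redirection: try to resolve a random hostname that
# should not exist. A well-behaved resolver raises an error; a redirecting one
# returns the IP address of its own search/ad page instead.
bogus = "nxdomain-test-" + uuid.uuid4().hex + ".com"

try:
    addr = socket.gethostbyname(bogus)
    print(bogus, "resolved to", addr, "-- this resolver may be redirecting failed lookups")
except socket.gaierror:
    print(bogus, "did not resolve -- no redirection seen on this lookup")
```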

It appears that Windstream inadvertently or deliberately took this activity to the next level, according to its own statement and the complaints of some of its customers, reproduced on the Windstream forum page of DSL Reports. Here's one protest:

"Dear Windstream,
For future reference: When I use google via the firefox search bar I actually want to go to google not » searchredirect.windstream.net/
This redirect happens in both windows and linux even if dns is hard set in router and tcp/ip settings
It took me 45 minutes to figure out how to disable this 'feature'
» searchredirect.windstream.net/prefs.php you can disable this 'feature' here
Honestly this isn't the kind of behavior I expect out of my isp and I consider it very unprofessional."

To these forum concerns, a Windstream support person initially posted this reply: "We apologize as this is an issue that we are aware of and are currently working to resolve. You should not be getting that redirect page when you are doing your searches. We should have this resolved soon." Windstream's Twitter page then declared the problem fixed: "Windstream has resolved unintentional issues with Firefox search. Apologies for the troubles you've had."

But this episode raises some serious worries, among them: how much should your ISP be allowed to monkey around with your Web browsing activity under any circumstances? Free Press has already called for the Federal Communications Commission to investigate this affair.

"If initial allegations are true, Windstream has crossed the line and is actively interfering with its subscribers' Internet communications," the reform groups' S. Derek Turner declared. "Hijacking a search query is not much different than deliberately ‘redirecting’ a user from NYTimes.com to WashingtonPost.com and a limited 'opt-out' capability is not enough to justify Internet discrimination. This is further proof of the need for strong open Internet rules, comprehensive transparency and disclosure obligations, and a process for relief at the FCC."

The issue has been resolved

Ars asked Windstream about these concerns. Not surprisingly, the ISP isn't crazy about the probe idea. "We don't think an investigation is necessary since the issue has been resolved," the company told Ars.

In truth, we'd be a bit surprised if the FCC jumped on this conundrum too quickly. Everybody's waiting to hear what the DC Circuit Court of Appeals has to say about the Commission's authority to sanction Comcast for P2P blocking, and most observers don't expect it to go well for the agency [update: the court has ruled]. As the Free Press statement suggests, the FCC's authority around these ISP issues is still a work in progress.

But given that ICANN has already condemned the practice of ISP redirection in the case of misspelled or nonexistent domain names, it doesn't seem like we've heard the last of this issue. Indeed, Windstream's quick response to subscriber complaints suggests the service knows that the watchdogs are watching. Windstream's latest repairs of its search system "do not require customers who chose to opt-out to do so again," the ISP assured us.

Court: FCC had no right to sanction Comcast for P2P blocking

The FCC's decision to sanction Comcast for its 2007 P2P blocking was overruled today by the US Court of Appeals for the DC Circuit. The question before the court was whether the FCC had the legal authority to "regulate an Internet service provider's network management practice." According to a three-judge panel, "the Commission has failed to make that showing" and the FCC's order against Comcast is tossed.

When the complaints against Comcast first surfaced, they argued that the company was violating the FCC's "Internet Policy Statement" drafted in 2005. That statement provided "four freedoms" to Internet users, including freedom from traffic discrimination apart from reasonable network management. The FCC decided that Comcast's actions had not been "reasonable network management," but Comcast took the agency to court, arguing that the FCC had no right to regulate its network management practices at all.

The Internet Policy Statement was not a rule; instead, it was a set of guidelines, and even the statement admitted that the principles weren't legally enforceable. To sanction Comcast, the FCC relied on its "ancillary" jurisdiction to implement the authority that Congress gave it—but was this kind of network management ruling really within the FCC's remit?

The court held that it wasn't, that Congress had never given the agency the authority necessary to do this, and that the entire proceeding was illegitimate. The FCC's "Order" against Comcast is therefore vacated; Comcast wins.

The decision wasn't a surprise; during oral argument earlier this year, the judges pressed the FCC's top lawyer repeatedly. The Policy Statement was "aspirational, not operational," they said; the FCC had not identified a "specific statute" Comcast violated; and the FCC "can't get an unbridled, roving commission to go about doing good."

Comcast pledged some time ago to change the way it handled traffic management, and it has already transitioned to a protocol-agnostic approach to congestion.

Tuesday 6 April 2010

Canonical announces phone sync for Ubuntu One subscribers

Canonical, the company behind the Ubuntu Linux distribution, announced today that its Ubuntu One cloud service will soon gain support for mobile contact synchronization. The feature will be available to users who are paying for the higher tier of Ubuntu One service.

Canonical officially launched the Ubuntu One service last year alongside the release of Ubuntu 9.10. The service allows users to keep files and some application data synchronized between multiple computers. The company is planning to roll out several significant new Ubuntu One features when Ubuntu 10.04, codenamed Lucid Lynx, is released later this month. The new Ubuntu One music store, which is integrated into the Rhythmbox audio player, will use Ubuntu One to deploy purchased music to all of the user's computers. Much like the music store, the new mobile synchronization features are opening up for testing, but will officially launch alongside Ubuntu 10.04.

Ubuntu One mobile synchronization is powered by Funambol, a mobile push synchronization platform that is partly distributed under open source software licenses. Ubuntu One contact synchronization will work on the wide range of devices that are supported by Funambol's client software. You can download the synchronization program for your specific device by selecting it at the beta phone page on the Ubuntu One website.

Canonical is also releasing its own branded Funambol-based mobile client applications for certain platforms. For example, the company is offering an Ubuntu One contact synchronization program for the iPhone, which is now available from the iTunes Store. Plugins are available for several desktop applications too, such as Thunderbird.

The underlying Funambol technology supports push synchronization for calendars, notes, and other kinds of data, but Ubuntu One's mobile sync only supports contacts at the present time. It's possible that its scope will be expanded as the service evolves.

Canonical developer Martin Albisetti described the new mobile sync feature in an announcement today on the Ubuntu One users mailing list.

"Getting contacts on CouchDB and replicating between desktops and the cloud was the first big step. The second, and much bigger step, is to actually get those contacts from and to mobile phones. To achieve this, we have partnered with a company called Funambol, who share our views on open source, and have an established a proven software stack that synchronizes thousands of mobile phones and other devices," Albisetti wrote. "Right now we're at a stage where we feel confident opening up the service for wider testing. We strongly recommend that [testers] have a backup of [their] contacts since we've only tested with a hand-full of phone models at this point."

Although the service is intended for paying Ubuntu One customers, nonsubscribers will get an opportunity to test it for free during a 30-day trial period. Albisetti says that the free trial will start following the release of Ubuntu 10.04. Right now, Canonical is seeking help from the user community to test the service. He encourages users to provide feedback in the #ubuntuone IRC channel and on the Ubuntu One mailing list.

When we reviewed Ubuntu 9.10 last year, we noted that the lack of mobile synchronization was one of the most glaring deficiencies of the Ubuntu One service. Many users already get contact synchronization for free through Google and other providers, but the feature could still potentially help make an Ubuntu One paid plan seem compelling to some regular end users.

HP Slate pricing and specs leak: Atom CPU, 1080p video

It looks like HP employees have been given an internal HP Slate presentation comparing it to the Apple iPad, according to a slide obtained by Engadget. The device will cost either $549 for the 32GB flash storage version or $599 for the 64GB version.

Both versions sport an 8.9-inch 1024 x 600 capacitive multitouch display, a 1.6GHz Atom Z530 processor with UMA graphics, an accelerator for 1080p video playback, and 1GB of non-upgradeable RAM. They'll also include a two-cell, five-hour battery, an SDHC slot, two cameras, a USB port, a SIM card slot for the optional 3G modem, and a dock connector for power, audio, and HDMI out. The included Windows 7 edition will be Home Premium.

Those are the unofficial details, anyway. Three months ago, HP went on record to explain how the project started and gave some vague details on the product: thin, light, somewhere between 4 and 10 inches, and able to take on the e-reader market, currently dominated by Sony and Amazon, head on. 2010 is the year for slates, HP said, and that's thanks to a convergence of low-cost, low-power processors and the touch-aware Windows 7. The company would only confirm that the tablet would be out this year, would run Microsoft's latest and greatest, and would cost less than $1,500.

The questions that have yet to be answered either by HP or by rumors are mainly around what software it will feature and how exactly Windows 7 will be customized to work on the slate (though the slide does mention "HP touch-optimized UI").

HTML5 and WebGL bring Quake to the browser

The developers behind the GWT Java framework have implemented a port of Quake 2 that runs natively in modern Web browsers. It takes advantage of recent innovations in emerging standards-based Web technologies such as WebGL and WebSockets.

GWT is designed to enable Web application development with Java, letting developers benefit from Java's static typing and more rigidly structured architecture; the toolkit generates the JavaScript code needed for the application's client-side components. GWT powers several high-profile Google Web applications, including Google Wave. The GWT developers implemented browser-based Quake by running a Java port of the Quake 2 engine on top of GWT.

GWT and the Java-based Quake engine both had to be extended and modified extensively in order for the pairing to work, but the effort paid off. It serves as a compelling example of how emerging standards are becoming increasingly capable of delivering all of the necessary functionality for interactive 3D network gaming.

As some readers might remember, Google released a Quake demo for Native Client (NaCl) when the plug-in was first announced in 2008. The state of open Web technologies has clearly advanced since that time. It's no longer necessary to rely on plugins to deliver this kind of functionality.

Monday 5 April 2010

It looks like time to build an Atlantic seaboard wind grid

One of the greatest challenges of integrating renewable power into the US grid is its intermittent nature. This is especially true for wind power, which is prone to rapid fluctuations that can leave utilities scrambling to either add or dump power. But the temptation of wind is large—the US has wind resources to cover 23 times its current electric use—and that has led to many ideas about how best to deal with the erratic supply. A study that will appear in PNAS later this week suggests a radical solution: connect offshore wind up the entire Eastern Seaboard of the US into a single, huge, baseload-style generating system.

The authors of the new study note that, currently at least, the fluctuations in wind power are handled by redundant generating and transmission equipment, which generally involves burning fossil fuels when the wind slacks off. One of the two major alternatives under consideration involves energy storage, either in large, on-grid facilities or through ad-hoc aggregation of the excess capacity in items like electric vehicles. The other, and the one the authors consider here, is aggregating geographically diverse collections of wind farms.

They're hardly the first to consider this prospect, and a variety of other studies have examined the potential of distributed wind in specific locations. So, for example, a study of wind potential in the UK found a tendency for the entire geographic area to experience similar wind conditions, meaning that even a dispersed generating system might not work there.

The new study builds on the earlier work by considering why this is the case. The UK is about 1,100km along its north-south axis, and the high-pressure systems that bring it low wind tend to be roughly 1,000km in size. In contrast, the US East Coast is roughly 2,500km in length, and has a tendency to spawn storms that move up the coast in a roughly northeasterly direction. Many states along the coast are already in the planning or permitting stages for large offshore wind facilities (New Jersey alone is considering at least three) that will total over a terawatt in capacity, assuming they're all built.

So, for the new analysis, the authors considered a total of 11 sites on the continental shelf, ranging from the Florida Keys to the Gulf of Maine. Wind speed data was available for each of the sites over a five-year period, with one-hour resolution between readings.

As expected, sites close to each other showed a fair degree of correlation in wind speeds—if the winds had died at one, they were likely to be dead at a site that was relatively nearby. By the time sites 750km apart were considered, however, the correlation had dropped below 0.2, and it dropped below 0.1 for sites over 1,300km apart. Given the large geographic spread of the sites, then, they're unlikely to all be hit by a single weather system that causes a synchronized rise or fall in production.

The authors didn't see much in the way of negative correlations, however, in which a lack of wind at one site would typically mean high winds at another. Still, the aggregated wind power was very stable. Although production from individual sites would rise and fall by as much as 50 percent within an hour, the aggregate as a whole rarely saw changes greater than 10 percent. Throughout the entire period, the ensemble stayed above 5 percent of its rated capacity except for a grand total of 20 days, and it never dropped to zero. For the most part, output was typically near the middle of the capacity range.
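The qualitative picture is easy to reproduce with synthetic data. The Python sketch below is purely illustrative: the site positions, the assumed 500km correlation length, the hour-to-hour persistence, and the crude wind-to-power mapping are my own assumptions rather than anything taken from the study, but it captures the flavor of the two results above, with correlation falling off as separation grows and the multi-site aggregate swinging far less from hour to hour than any single site.

```python
import numpy as np

# Illustrative only: synthetic hourly wind for 11 made-up sites. The spatial
# correlation length (500 km), the hour-to-hour persistence, and the crude
# wind-to-power mapping are assumptions for the sketch, not the study's data.
rng = np.random.default_rng(0)
hours = 5 * 365 * 24
site_km = np.linspace(0.0, 2500.0, 11)              # positions along the coast (assumed)

dist = np.abs(site_km[:, None] - site_km[None, :])
chol = np.linalg.cholesky(np.exp(-dist / 500.0))    # spatially correlated innovations
phi = 0.98                                          # assumed AR(1) persistence per hour

wind = np.zeros((len(site_km), hours))
shocks = chol @ rng.normal(size=(len(site_km), hours))
for t in range(1, hours):
    wind[:, t] = phi * wind[:, t - 1] + np.sqrt(1 - phi**2) * shocks[:, t]

power = np.clip(0.4 + 0.3 * wind, 0.0, 1.0)         # capacity-factor proxy in [0, 1]

for a, b in [(0, 3), (0, 6), (0, 10)]:              # 750, 1500, 2500 km separations
    r = np.corrcoef(power[a], power[b])[0, 1]
    print(f"{dist[a, b]:6.0f} km apart: correlation {r:+.2f}")

# Hour-to-hour swings: a single site versus the 11-site aggregate.
single = np.abs(np.diff(power[0]))
agg = np.abs(np.diff(power.mean(axis=0)))
print(f"99th-percentile hourly change, single site: {np.percentile(single, 99):.1%}")
print(f"99th-percentile hourly change, aggregate:   {np.percentile(agg, 99):.1%}")
```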

The authors also analyzed individual weather events, including one where a large anticyclonic system was parked over North Carolina. Although the center of the seaboard was largely quiet, stations in Florida saw strong westward winds, while those in New England had a strong eastward wind, exactly as the authors had predicted. All 20 days of low production were analyzed, but the authors conclude there's no pattern involved; each of the instances was the product of unique circumstances.

The authors seem rather interested in the idea of actually physically connecting all the sites with high-capacity undersea cables into what they term the Atlantic Transmission Grid. At roughly $4 million a mile to install, this would still account for less than 15 percent of the total price of the full system of wind farms, a figure that's in line with building redundant generating capacity onshore. Still, there would seem to be advantages to building the interconnect on land, where it would be easier to service and could integrate other intermittent power sources, like solar.

There's also a certain irony to the fact that the authors suggest that planning and licensing the Atlantic Transmission Grid would be simpler because it involves a single nation. While that's true to an extent, the grid would have to service a patchwork of local utilities, and incorporate sites in states that have very different perspectives on (and legislated requirements for) renewable power.

New server platform and 12-core Opteron keep AMD in the game

The x86 server wars heated up significantly in March, with the end of the month seeing a major processor launch from each vendor: AMD launched its 12-core Opteron 6100 processor, codenamed Magny-Cours, on the 29th, and Intel then finished off the month with the launch of the 8-core Nehalem EX Xeons.

These were pretty major launches, but I've covered Nehalem EX previously so I want to focus on AMD this time around.

AMD actually launched ten different processors at a range of clockspeeds (1.7 to 2.3GHz) and core counts (8 and 12); all of these parts make up the Opteron 6000 series, which is aimed at two- and four-socket configurations. These two configs represent the bulk of the server market, and AMD is aiming to be the value player here.

In terms of microarchitecture, the new Opterons don't differ significantly from their predecessors, or indeed from the previous few generations. The addition of support for virtualized I/O is the main change at the core level, a change that brings AMD up to par with Intel's Nehalem parts in virtualization support.

At the level of what I'd like to call "macroarchitecture"—meaning the arrangement of cores, I/O, cache, and other resources on the processor die—there are some significant improvements.

Total cache for the 8-core parts comes to 17.1MB, while the 12-core parts weigh in at 19.6MB.

On the memory front, the new Opterons boast support for four channels of DDR3—that's a lot of aggregate memory bandwidth across two or four sockets. For I/O, each package has four HT 3.0 (x16) links; this amount of I/O bandwidth is needed because there are so many cores per socket. In fact, moving out to the system level, you can see where AMD put most of its engineering effort.
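For a rough sense of what that memory bandwidth adds up to, here's the back-of-the-envelope math; the DDR3-1333 speed grade is my assumption for illustration, since the supported memory speed isn't specified above:

```python
# Back-of-the-envelope peak memory bandwidth for the configuration described above.
# The DDR3-1333 speed grade is an assumption; actual supported speeds depend on the
# platform and DIMM population.
CHANNELS_PER_SOCKET = 4
TRANSFERS_PER_SEC = 1333e6          # DDR3-1333 (assumed)
BYTES_PER_TRANSFER = 8              # 64-bit channel

per_socket = CHANNELS_PER_SOCKET * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
for sockets in (2, 4):
    print(f"{sockets}-socket peak: {sockets * per_socket:.1f} GB/s")
# Roughly 85 GB/s for 2P and 171 GB/s for 4P, before real-world efficiency losses.
```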

Direct Connect 2.0

One of the key ways that AMD is amping up the bang-per-buck is by taking a route that it had previously made fun of Intel for: sandwiching two n-core dies into a single package (a multichip module, or MCM) and calling the resulting part a 2n-core processor. The 12-core is two six-core dies, and the 8-core part is two four-core dies (actually, it's two six-core dies with some cores disabled, an approach that helps get yields up and costs down).

Back when Intel started doing this MCM-based multicore approach in the waning days of the Pentium 4 era, its impact on system architecture was a lot more straightforward. But AMD's NUMA system architecture, where the on-die memory controller means that the memory pool is distributed among all the sockets in a multisocket system, complicates the MCM approach. This is because the number of NUMA nodes no longer equals the number of sockets. AMD's answer to this problem is called Direct Connect 2.0.

Take a look at the diagram below, which shows the I/O and memory buses in the Magny-Cours part. You can see that each individual Magny-Cours die (or "node," from the perspective of NUMA topology) has two memory controllers and four HT 3.0 controllers.

The two memory controllers on each die connect directly to the pool of DDR3 that hangs off of each socket, which gives each socket its four total DDR3 channels.

The way the HT link bandwidth is divided up in a two-socket config is a little non-obvious, but you can see what's going on from the diagram. The controllers split the link bandwidth for each die/node into three x16 links and two x8 links. One of the x8 and one of the x16 are then combined to make what's essentially an x24 link, which is used for directly connecting the two dies that are in the same package.

Another x16 link goes out to connect to the first die in the other socket, and the remaining x8 link connects diagonally to the second die in the other socket. The fourth remaining x16 link on one of the dies is not connected to anything, and on the other die it's used for I/O. The diagram at right attempts to illustrate how this works—it's not great, but if you stare at it for a minute it makes sense.
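For readers who'd rather read the wiring than squint at a diagram, here is the same two-socket arrangement written out as a small Python structure; the die names are invented, the lane widths simply restate the description above, and socket 1's pair of dies mirrors socket 0's.

```python
# Hypothetical labels for the dies in a two-socket Magny-Cours system; link
# widths (in HyperTransport lanes) restate the arrangement described above.
ht_links = {
    "socket0_die0": {
        "socket0_die1": 24,   # x16 + x8 combined: the two dies sharing a package
        "socket1_die0": 16,   # straight across to the first die in the other socket
        "socket1_die1": 8,    # diagonal link to the second die in the other socket
        "io_hub": 16,         # this die's remaining x16 drives I/O
    },
    "socket0_die1": {
        "socket0_die0": 24,
        "socket1_die1": 16,
        "socket1_die0": 8,
        # the remaining x16 link on this die is left unconnected
    },
    # socket1_die0 and socket1_die1 mirror the two entries above
}

# The payoff: every die reaches every other die in at most one hop.
for src, links in ht_links.items():
    print(src, "->", ", ".join(f"{dst} (x{w})" for dst, w in links.items()))
```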

What's new about Direct Connect 2.0 (as opposed to Istanbul's 1.0 version) are the diagonal links, which let each node connect to two other nodes. Direct Connect 1.0 was missing the diagonal links, so if memory was in the wrong pool a node might have to make two hops to get it, instead of just one. Of course, the diagonal links are half the bandwidth of the regular links, but you can't have everything.

With so many cores per socket, congestion is still going to be a problem, despite the four HT 3.0 links per node. This being the case, AMD uses a technology called HT Assist to cut back on cache snoops among the sockets, which helps mitigate some of the traffic congestion that could crop up with all of those cores and off-die links.

Despite the drawbacks of the MCM approach, Intel proved with its own dual-die products that the strategy works, especially if you're targeting cost and not just raw performance. MCMs are also great when you want to pack a lot of compute power into a smaller, cheaper system and are willing to compromise a bit on memory performance for certain kinds of latency-sensitive, write-intensive workloads. Specifically, Magny-Cours should make for a great HPC platform, because it offers plenty of hardware per socket, per dollar, and per watt, and that's just what you need to put together a cluster of machines that can grind through heavily data-parallel workloads.

Databases are probably a different story, especially when you compare Magny-Cours to Nehalem EX's buffer-enabled memory subsystem, which lets you cheaply cram loads of memory onto each socket. It's also the case that these types of workloads tend to have more coherency traffic, because different nodes may be accessing the same region of memory. In this case, the balance may tip in Intel's favor.

In all, though, the Magny-Cours launch is a huge one for AMD, and its platform-level innovations like Direct Connect 2.0, support for virtualized I/O, power efficiency, and relatively low cost should keep AMD in the server game. And staying in the server game has been AMD's number one survival priority in the past two years. I pointed out at the end of 2009 just how much other business AMD has thrown overboard as the company shrank back into its core x86 server and GPU businesses, and this new server platform reflects that single-minded focus. AMD's processors may not have the raw per-core performance that Intel's Nehalem boasts, but the company is doing a lot at the macroarchitecture and system architecture levels to narrow that gap.

Google fiber losers, unite! (And then build your own network)

Now that Google has wrapped up the application period for its open access, 1Gbps fiber testbed, we know that more than 1,000 US cities want the network. Only a couple will get it, though; what's going to happen to everyone else?

Broadband consultant Craig Settles and Greensboro, North Carolina fiber booster Jay Ovittore have joined forces to start "Communities United for Broadband." The idea is simple: create a place where communities can share strategies for moving forward with high-speed broadband plans—even if Google says no to their bid.

Pent-up demand

Enthusiasm about broadband has been running high, especially during the last 18 months. In 2009, President Obama's stimulus bill set aside billions for broadband. That money, now being disbursed, is already funding plenty of regional and middle-mile projects, and it encouraged communities to think more carefully about how broadband could be made better. Then came the National Broadband Plan, which has inspired broadband discussion over the last year and now promotes some major changes, like providing Universal Service Fund money to broadband providers instead of phone companies.

But no local mayor jumped into a shark tank or changed their city's name to get federal broadband stimulus funds. It took Google's out-of-the-blue announcement earlier this year to really bring broadband excitement from governments down to the grassroots level. With so much enthusiasm generated by the project, and with cities having spent so much time collecting all sorts of useful data about their own communities, now would seem the perfect opportunity for people to take their broadband destiny into their own hands—with or without Google.

That's what Settles and Ovittore hope to do. They're starting with a Facebook group to coordinate broadband boosters around the country. Already, a few hundred people have signed on, among them many local administrators of the Google fiber bids.

Google's project really "brought into focus what the value of broadband is," Settles tells Ars. Even before the project has been built, Google's announcement has woken people up to what's possible; maximum speeds of 6Mbps from a local DSL provider simply aren't state-of-the-art. People want more, they know it's possible, but many see no way to get it from existing providers.

Everyone wanted a piece of the Google action because the company was ready to build the network itself, pledging to charge users quite reasonable rates for access. But if municipalities or regional governments are going to get into the fiber-building game, that's a different and much scarier proposition, with real tax money on the table.

Settles is no stranger to these issues, having just written a book called Fighting the Next Good Fight: Bringing true broadband to your community, but he says that "we're not in the pitchfork business" when it comes to dealing with incumbents. If existing companies will provide the services that local residents want, fine. If communities can use the recent wave of broadband excitement to encourage new entrants or nonprofits to deploy fiber, terrific. But if no one steps up to the plate, Settles encourages local governments to take the initiative themselves.

"I believe incumbents are unaware of, or unconcerned with, the depth of people’s dislike for their service provider as well as the lack of broadband competition," he said in the official announcement. "The fact that, spending almost no money and making no concrete offers, Google generated so many community and individual responses within just seven weeks clearly shows how much incumbents have failed the market. Our effort on Facebook gives communities one path to helping correct this failure."

The page is already stimulating discussion. When one participant asked about starting a "fiber co-op, akin to the farmer's co-ops, that allows cities to band together for bargaining purposes and equipment purchases," another person offered to talk, saying, "We are already down this road and would love to help others along in other communities."

As Settles noted in our conversation, there aren't many "best practices" in this area. What works? What doesn't? Certainly, as municipal involvement with free WiFi a few years back showed us, there are good ways and bad ways to get governments involved in infrastructure buildouts. Sharing information, pooling resources, and grouping buying power should help, though Google could certainly do its part by producing detailed best practices guides for ISPs based on its own experience building the 1Gbps testbed.

Last week, Google seemed to hint at just such a plan. "Wherever we decide to build," the company wrote, "we hope to learn lessons that will help improve Internet access everywhere. After all, you shouldn't have to jump into frozen lakes and shark tanks to get ultra high-speed broadband."