
Dual-core for mobile phones 2010-Jun-01 at 10:26 PDT

Posted by Scott Arbeit in Blog.

Intel, Qualcomm go dual-core for small devices, by Brooke Crothers, 31-May-2010

Intel said Tuesday at the Computex conference in Taiwan that it has begun producing dual-core Atom processors for Netbooks, a product first for Intel. New Intel technology will enable "very, very thin form factors with dual-core Atom," Matthew Parker, general manager of Intel’s Atom client division, said in a phone interview Friday. Parker said future Netbooks will get as thin as half an inch.

Meanwhile, Qualcomm announced that it has begun sample shipments of its first dual-core Snapdragon silicon, targeted at high-end smartphones and Netbook-like devices called smartbooks. The single-core Snapdragon processor currently powers smartphones such as Google’s Nexus One and tablets such as the Dell Streak.

Mobile phone speeds are about to go vertical.  Today’s new laptops all have a minimum of two cores… four cores are coming soon.  Starting in 2011, mobile phones will also ship with two cores… with more coming soon after.

Mobile phone 2015 = Laptop 2010 in terms of processing power.  But the phones will consume far less electricity and generate far less heat in delivering that computing power.
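
A quick back-of-the-envelope check on that equation (my assumptions, not the article’s): if mobile processors double in performance every 18 months, five years buys a bit more than three doublings, or roughly a tenfold speedup… about the gap that separates a 2010 smartphone from a 2010 laptop.

```python
# Sanity check: how much faster do phones get in five years if their
# processing power doubles every 18 months? (Illustrative assumption.)
years = 5
doublings = years / 1.5          # ~3.3 doublings
speedup = 2 ** doublings         # ~10x
print(f"{doublings:.1f} doublings -> roughly {speedup:.0f}x faster by 2015")
```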


China challenges the US and the world with the second-fastest supercomputer ever 2010-Jun-01 at 00:40 PDT

Posted by Scott Arbeit in Blog.

Chinese Supercomputer Is Ranked World’s Second-Fastest, Challenging U.S. Dominance, by John Markoff, 31-May-2010

The Dawning Nebulae, based at the National Supercomputing Center in Shenzhen, China, has achieved a sustained computing speed of 1.27 petaflops — the equivalent of one thousand trillion mathematical operations a second — in the latest semiannual ranking of the world’s fastest 500 computers.

And they have an even faster one coming in the fall.

But China appears intent on challenging American dominance. There had been some expectation that China would make an effort to complete a system based on Chinese-designed components in time for the June ranking. The Nebulae is based on chips from Intel and Nvidia.

The new system, which is based on a microprocessor that has been designed and manufactured in China, is now expected later this year. A number of supercomputing industry scientists and engineers said that it was possible that the new machine would claim the title of world’s fastest.

If you think this is fast… well, you probably already own a computer capable of tens of gigaflops right now.  Getting from there to 1.27 petaflops is a factor of roughly 25,000, or about 14 doublings… so even at Moore’s Law rates of a doubling every 18 months, you’ll own a computer this fast in a little over twenty years, and your mobile device/phone won’t be far behind.
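
Here’s that arithmetic spelled out (a sketch under my own assumption of 50 gigaflops for a 2010 desktop):

```python
import math

# How long until a desktop machine matches the Nebulae's 1.27 petaflops?
# Assumes a 2010 desktop delivers ~50 gigaflops (an illustrative figure)
# and that performance doubles every 18 months.
desktop_flops = 50e9            # 50 gigaflops
nebulae_flops = 1.27e15         # 1.27 petaflops

doublings = math.log2(nebulae_flops / desktop_flops)   # ~14.6
years = doublings * 1.5                                # 18 months each
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```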

Quantum dot amplifiers… that go up to 11 2010-May-28 at 17:21 PDT

Posted by Scott Arbeit in Blog.

Moore’s Law reaches its limit with quantum dot amplifier, by Dana Blankenhorn, 25-May-2010

A Russian-Japanese team has created a quantum dot amplifier, an “artificial atom” that can amplify an electronic signal, a central electronic function. The announcement follows by three years the same team’s creation of a quantum dot laser.

Quantum dots are often called artificial atoms because, while they are made up of multiple atoms, they can be treated in theory like single atoms, and their electron shells can be manipulated.

The ultimate goal of quantum dot researchers is the construction of a quantum computer — replicating all of a computer’s functions on a nano-level. But the dots have other uses as well. As I wrote here in January they can make nifty solar cells, too.

They don’t really go up to 11.  But they do start to address a fundamental limit to Moore’s Law: we keep making chips with smaller and smaller electronic pathways (Intel is manufacturing at 32nm right now), but once we reach the size of individual atoms, we can’t go any smaller with current silicon-based technology.
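
How close is that limit?  A quick calculation (my own numbers; silicon’s lattice spacing is roughly 0.5nm, so call that the floor):

```python
import math

# How many more feature-size halvings fit between today's 32nm process
# and the scale of individual silicon atoms (~0.5nm lattice spacing)?
feature_size_nm = 32.0
atomic_scale_nm = 0.5

halvings = math.log2(feature_size_nm / atomic_scale_nm)   # 6 halvings
print(f"Only about {halvings:.0f} halvings left before atomic scale")
```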

Quantum computers take us off that path of miniaturization of existing technology, on to a whole new set of technologies that sidestep the problem, and carry us forward into a newer, far more powerful computing future.

And, as he said… this is good for the development of solar cells too.

Creating the computer models for a disaster 2010-May-27 at 12:18 PDT

Posted by Scott Arbeit in Blog.

I love this story.  Within a day, these researchers got time on a supercomputer and started creating 3-D models of what the oil spill might look like… especially if a hurricane comes through the Gulf, which is likely.

Researchers race to produce 3D models of BP oil spill, by Patrick Thibodeau, 26-May-2010

Scientists have embarked on a crash effort to use one of the world’s largest supercomputers to create 3D models to simulate how BP’s massive Gulf of Mexico oil spill will affect coastal areas.

Acting within 24 hours of receiving a request from researchers, the National Science Foundation late last week made an emergency allocation of 1 million compute hours on a supercomputer at the Texas Advanced Computing Center at the University of Texas to study how the oil spreading from BP’s gusher will affect coastlines.

The goal is to produce models that can forecast how the oil may spread in environmentally sensitive areas by showing in detail what happens when oil interacts with marshes, vegetation and currents.

The amazing part of this, for me, is that we’re close enough to creating amazingly complicated 3-D models – ones that even take fluid dynamics into account – that computer scientists today think it’s important to do for this oil spill.

We’re not quite there – the massive computational power required to accurately model a spill like this won’t come for another few doublings of CPU, memory, storage, and the like – but we’ll be there soon.
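
To give a feel for what these models compute, here’s a toy sketch of my own (nothing like the researchers’ actual code): even the simplest flat, 2-D picture of a slick has to combine transport by currents with spreading, which is the classic advection-diffusion equation.

```python
import numpy as np

# Toy 2-D advection-diffusion of an "oil" concentration field.
# Purely illustrative: real spill models couple 3-D fluid dynamics,
# wind, evaporation, and actual coastline and current data.
n, steps = 100, 500
dx, dt = 1.0, 0.1
u, v = 0.5, 0.2          # constant current (x and y components)
diff = 0.5               # diffusion (spreading) coefficient

c = np.zeros((n, n))
c[20:25, 20:25] = 1.0    # initial slick near the "wellhead"

for _ in range(steps):
    # upwind differences for advection, central differences for diffusion
    c_x = (c - np.roll(c, 1, axis=1)) / dx
    c_y = (c - np.roll(c, 1, axis=0)) / dx
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    c = c + dt * (-u * c_x - v * c_y + diff * lap)

print(f"slick now covers {np.count_nonzero(c > 0.01)} grid cells")
```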

"The hope — and I’m being optimistic — is that it would you give you a much more accurate forecast of a potential impact by geography and potentially by what kind of impact is going to occur," said Wells. The 2D models "haven’t done very well to date," he explained.

And the next time – and I hope there won’t be a next time – we’ll know exactly how to react, because we’ll be able to verify through computer modeling that our responses are optimal before we even start.  We’ll have “multiple options processed overnight” kind of power.  I know that doesn’t clean up the oil, but it sure helps with resource allocation.

A 48-core chip from Intel 2009-Dec-27 at 23:11 PST

Posted by Scott Arbeit in Blog.

Today’s laptop and desktop computers come standard with dual-core architectures… in other words, they have two CPUs on one chip.  Fortunately, Windows, Unix, and even Mac OS have supported multiprocessor machines for well over a decade, so in many ways we’ve smoothly taken advantage of this new processing power.
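
The operating system handles the scheduling, but an application still has to split its work into independent pieces to benefit.  A minimal sketch of what that looks like (my own illustration, using Python’s standard library):

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(limit):
    """CPU-bound work: count primes below limit by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [30_000] * 8    # eight independent pieces of work
    # The process pool spreads the chunks across all available cores.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(count_primes, chunks))
    print(f"{os.cpu_count()} cores, counts: {results}")
```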

Now Intel has demonstrated an experimental 48-core CPU with all kinds of interesting new architecture features to allow for as much data as possible to flow through the chip.

While Intel will integrate key features in a new line of Core-branded chips early next year and introduce six- and eight-core processors later in 2010, this prototype contains 48 fully programmable Intel processing cores, the most ever on a single silicon chip. It also includes a high-speed on-chip network for sharing information along with newly invented power management techniques that allow all 48 cores to operate extremely energy efficiently at as little as 25 watts, or at 125 watts when running at maximum performance (about as much as today’s Intel processors and just two standard household light bulbs).

Intel plans to gain a better understanding of how to schedule and coordinate the many cores of this experimental chip for its future mainstream chips. For example, future laptops with processing capability of this magnitude could have “vision” in the same way a human can see objects and motion as it happens and with high accuracy.

Moore’s Law suggests (in not so many words) a doubling of computing capacity every 18 months.  That means in ten years we’ll see nearly seven doublings… an increase in computing capacity of 128 times for the same price.  If we’re on two-core machines now (with 4GB RAM), and we’re about to get mainstream four-core CPUs (a doubling), then by 2020 we’ll have computers on our desks with 256 cores (and something like 512GB of RAM) — in other words, low-end supercomputers by today’s standards — for the same $1,000 we spend today to get two cores.
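
The arithmetic, spelled out (same 18-months-per-doubling assumption):

```python
# Project 2010's $1,000 two-core desktop forward ten years at
# Moore's Law pacing: capacity doubles every 18 months.
years = 10
doublings = years / 1.5                 # ~6.7, call it 7
factor = 2 ** round(doublings)          # 128x

cores_2010, ram_gb_2010 = 2, 4
print(f"growth factor: {factor}x")
print(f"cores in 2020: {cores_2010 * factor}")      # 256
print(f"RAM in 2020:   {ram_gb_2010 * factor} GB")  # 512
```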

This announcement from Intel is the down payment on this vision… a mainstream 48-core system is around four doublings of capacity away, or about six years, and a high-end, higher-cost version of it will come a couple of years sooner.  When you think about chips like this not just in your laptop, but also in your mobile phone, the possibilities start to get very interesting.  When you think about chips like this filling servers in massive cloud-computing data centers, things get very exciting, don’t they?