Page 1 of 1
[ 11 posts ]
pcernie
Legend
Joined: Sun Apr 26, 2009 12:30 pm Posts: 45931 Location: Belfast
Quote:
Nvidia VP Bill Dally has claimed that "Moore's law is dead" and declared that ground-up parallel processing is the future of computing.

Moore's law describes the trend for computing performance – processing speed, memory capacity and the like – to double approximately every two years; it is named after Intel co-founder Gordon E. Moore, who first outlined it back in 1965.

Moore's law is over?

Writing a guest column for Forbes magazine this month, Nvidia VP Bill Dally argues that "Moore's law is dead." Dally says that dual-, quad- and hex-core solutions are becoming increasingly inefficient, and likens multi-core chips to "trying to build an airplane by putting wings on a train." Instead, Nvidia feels that its own approach – ground-up parallel solutions made for increased energy efficiency – is the answer.

Energy efficiency is key

"Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem," Dally writes.

"A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance. Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance – at a tremendous expense in energy."

Nvidia's own CUDA architecture is already found in GeForce, ION, Quadro and Tesla GPUs.

"The path toward parallel computing will not be easy. After 40 years of serial programming, there is enormous resistance to change, since it requires a break with longstanding practices. Converting the enormous volume of existing serial programs to run in parallel is a formidable task, and one that is made even more difficult by the scarcity of programmers trained in parallel programming," the Nvidia man continues.

"The good news is that there is a way out of this crisis. Parallel computing can resurrect Moore's Law and provide a platform for future economic growth and commercial innovation. The challenge is for the computing industry to drop practices that have been in use for decades and adapt to this new platform."

Read more: http://www.techradar.com/news/computing ... z0mrkvWkIw
I'm curious to know if anyone thinks what he's saying is feasible/likely - I always suspect these sorts of things will take at least a decade just to get a foothold. Here's a handy plain-English guide from Wikipedia: http://en.wikipedia.org/wiki/Parallel_processing
Thoughts?
_________________Plain English advice on everything money, purchase and service related:
http://www.moneysavingexpert.com/
Mon May 03, 2010 11:50 am
Amnesia10
Legend
Joined: Fri Apr 24, 2009 2:02 am Posts: 29240 Location: Guantanamo Bay (thanks bobbdobbs)
Parallel computing is very hard to program for. A friend of mine, a contract programmer in a number of languages, used to program in Occam, and he said it was the worst language he'd worked in because you have to keep track of too many processes.
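The bookkeeping problem described above isn't unique to Occam. A minimal sketch in Python (my own illustration, nothing from the thread) shows the classic trap: two threads sharing one counter will silently lose updates unless every access is guarded, and the programmer has to remember that for every shared value in the program.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # "counter += 1" is really a read, an add and a write; two
        # threads can interleave between those steps and lose updates.
        # The lock forces the three steps to happen atomically.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; often less without it
```

Scale that up to hundreds of shared values and hundreds of processes and you get the "too many processes to keep track of" complaint above.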
_________________Do concentrate, 007... "You are gifted. Mine is bordering on seven seconds." https://www.dropbox.com/referrals/NTg5MzczNTk http://astore.amazon.co.uk/wwwx404couk-21
Mon May 03, 2010 12:09 pm
pcernie
Legend
Joined: Sun Apr 26, 2009 12:30 pm Posts: 45931 Location: Belfast
I'd guess the human factor is deadly in any sort of programming 
_________________Plain English advice on everything money, purchase and service related:
http://www.moneysavingexpert.com/
Mon May 03, 2010 12:12 pm
forquare1
I haven't seen my friends in so long
Joined: Thu Apr 23, 2009 6:36 pm Posts: 5150 Location: /dev/tty0
It's the kind of thing I could see Apple doing one day: creating a parallelised version of the OS and forcing all of their developers to port their apps to the new architecture. Apple would probably create some method to run existing applications with reduced performance and capabilities...
People I know who have done multi-threaded programming hate it because it is so challenging on the mind.
Mon May 03, 2010 12:22 pm
forquare1
I haven't seen my friends in so long
Joined: Thu Apr 23, 2009 6:36 pm Posts: 5150 Location: /dev/tty0
Parallel programming may be well suited to model-driven development, where the "programmer" creates models of what they want the program to do and then feeds those models into an application which generates the code. Such tools could produce parallel applications without as much need for humans to work out the parallelism themselves.
Mon May 03, 2010 12:25 pm
big_D
What's a life?
Joined: Thu Apr 23, 2009 8:25 pm Posts: 10691 Location: Bramsche
Energy efficiency is the key? Maybe nVidia should try applying that to their own hardware. Their current generation of high-end chips uses nearly twice as much power for the same performance as the high-end ATi chips - and they are considerably more expensive...
Edit: As to parallel programming, that was the aim of the Transputer back in the 80s, but it failed (Occam was created specifically to program the Transputer chips).
Another problem is that a lot of tasks can't be re-written to run in parallel - especially programs that tend to sit around waiting for user input. A text editor can't really do much with another 200 cores if it is simply waiting for the user to press a key.
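This point is usually formalised as Amdahl's law: if a fraction of a program is inherently serial, the speedup is capped at one over that fraction no matter how many cores you throw at it. A quick sketch (my own illustration, not from the article):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a program that is only 5% serial gets nowhere near 200x on 200 cores:
print(round(amdahl_speedup(0.05, 200), 1))     # 18.3
print(round(amdahl_speedup(0.05, 10_000), 1))  # 20.0 - capped near 1/0.05
```

So a 95%-parallel program tops out around 20x regardless of core count, which is exactly why piling on cores does little for mostly-serial desktop software.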
A lot of programs that can use multiple cores are already using them (look at Cinebench, which is a good example of a multi-core capable program). The question is where the scalability drops off.
nVidia could certainly bring some supercomputer-like parallel processing power to average users, but the question is, how many average users really want to run large weather-prediction models or genome mapping?
It makes sense for folders (SETI@home etc.) and certain games, although a current ATi or nVidia card plus PhysX does most of what is needed; the games themselves are still pretty linear - although you could probably give some of the AIs their own processors... But for the average user, who does nothing more than email, a bit of web surfing and maybe watching a video, it isn't going to bring any advantage.
When we start building computers into the core of a house and everything is computer controlled, we might make use of some of that power.
Likewise, for corporates it might make some sense: put in a couple of parallel-processing servers and give the users dumb Windows terminals, so all the processing power is available to every user as and when they need it. Although this is something we already do with current server technology - it just needs multiple processors and lots of RAM, and doesn't specifically "need" nVidia's brand of parallel processing over what Intel currently gives us - it might allow for greater scalability at cheaper prices than the current Intel solutions.
_________________ "Do you know what this is? Hmm? No, I can see you do not. You have that vacant look in your eyes, which says hold my head to your ear, you will hear the sea!" - Londo Molari
Executive Producer No Agenda Show 246
Last edited by big_D on Mon May 03, 2010 1:07 pm, edited 1 time in total.
Mon May 03, 2010 12:55 pm
Amnesia10
Legend
Joined: Fri Apr 24, 2009 2:02 am Posts: 29240 Location: Guantanamo Bay (thanks bobbdobbs)
I would imagine that any future OS that is fully parallel will be designed so that app programmers do not have to learn multi-processor languages. I expect there will be compilers that convert the program into a form the OS can use.
_________________Do concentrate, 007... "You are gifted. Mine is bordering on seven seconds." https://www.dropbox.com/referrals/NTg5MzczNTk http://astore.amazon.co.uk/wwwx404couk-21
Mon May 03, 2010 1:03 pm
big_D
What's a life?
Joined: Thu Apr 23, 2009 8:25 pm Posts: 10691 Location: Bramsche
There are parallel-processing extensions for the C language already. There need to be, because Intel and AMD servers already go up to several hundred thousand cores at the high end. But it is a very specialised field. Most programmers can't get their heads around splitting their tasks into hundreds of thousands of bits - and, as I said above, very few "general" computing tasks can make much use of more than a couple of cores, because there isn't that much of the process that can be run in parallel.

For non-supercomputing tasks, I think it is going to be more about running lots of different tasks at the same time, or supporting hundreds of users at the same time (web servers could benefit, but they usually run out of bandwidth before they run out of processing power). And for such tasks, Windows and *NIX are already in a position to take care of hundreds or thousands of cores - the Data Center and Supercomputer versions of Windows scale up, and Windows Server will run to thousands of "compute nodes" with 8 processors each (currently 92 logical processors per compute node), and that is with Intel Xeon or AMD Opteron chips.

nVidia's advantage will be bringing hundreds of logical processors into a single compute node (computer). The question is, how much memory will the individual processes need? Will the motherboards be able to accept the hundreds of GB of RAM required to process the data that fast? And they will need something a lot faster than SAS, let alone SATA 6G, to retrieve and store the data. They will also probably need less power than an Intel equivalent, because they can fit hundreds of cores on a die, so you won't need as many physical boxes, each with its own PSU.

The real question is, how many nVidia cores do you need to equal the processing power of a current 8-hexacore Intel-based HPC compute node?
The nVidia chips are a lot simpler than the Intel ones; they are designed to perform a limited number of (complex) calculations, but they are less general-purpose.

Edit: Excel 2010 also has the ability to delegate processing of UDFs (user-defined functions - functions written by the user or a third party) asynchronously to an HPC compute cluster. http://blogs.msdn.com/excel/archive/201 ... uster.aspx
_________________ "Do you know what this is? Hmm? No, I can see you do not. You have that vacant look in your eyes, which says hold my head to your ear, you will hear the sea!" - Londo Molari
Executive Producer No Agenda Show 246
Last edited by big_D on Mon May 03, 2010 2:01 pm, edited 1 time in total.
Mon May 03, 2010 1:32 pm
big_D
What's a life?
Joined: Thu Apr 23, 2009 8:25 pm Posts: 10691 Location: Bramsche
Ignore, hit quote instead of edit... 
_________________ "Do you know what this is? Hmm? No, I can see you do not. You have that vacant look in your eyes, which says hold my head to your ear, you will hear the sea!" - Londo Molari
Executive Producer No Agenda Show 246
Last edited by big_D on Mon May 03, 2010 2:01 pm, edited 1 time in total.
Mon May 03, 2010 1:37 pm
AlunD
Site Admin
Joined: Fri Apr 24, 2009 6:12 am Posts: 7011 Location: Wiltshire
You ain't wrong 
_________________ <input type="pickmeup" name="coffee" value="espresso" />
Mon May 03, 2010 1:40 pm
l3v1ck
What's a life?
Joined: Fri Apr 24, 2009 10:21 am Posts: 12700 Location: The Right Side of the Pennines (metaphorically & geographically)
Yet another misquote of Moore's law. It has nothing to do with speed, or with whether the transistors are used in a parallel way or not.
Mon May 03, 2010 1:48 pm