Saturday, December 28, 2013

Experiences with BOINC for Android

Let me preface this by saying that, for the most part, I am happy with BOINC for Android devices, having used it on my Nexus 7 (the old one, not the model released in 2013) for half a year or so now.  Doing this really makes me glad I abandoned my BOINC-on-Raspberry-Pi quest, since the Nexus has a processor at least as powerful as the Raspberry Pi's.  To explain that last comment, here is a brief discussion of a noticeable technological difference between ARM processors and the ubiquitous x86 CPUs found in personal computers.

ARM processors are loved because they use far less energy than x86 chips, not to mention that they are more cost effective to make.  This, though, comes at a cost.  From a mathematical perspective, if integers were the only numbers we had, ARM would be the leading processor technology in every device, because ARM processors are just about as good as x86 processors at integer work.  The trouble comes with decimals, i.e. floating point numbers: ARM processors are far weaker than x86 processors when those numbers show up in calculations.
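
To make that concrete, here is a rough C sketch of the kind of micro-benchmark that exposes the gap: it times a long integer loop against an equivalent floating point loop.  The loop count and the arithmetic are arbitrary, and the volatile qualifiers are only there to keep the compiler from optimizing the loops away, so treat any timings as a crude probe rather than a proper benchmark; the size of the gap will vary a lot from chip to chip.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long N = 100000000L;         /* arbitrary iteration count */
        volatile long isum = 0;            /* volatile keeps the loops from being optimized out */
        volatile double fsum = 0.0;

        clock_t t0 = clock();
        for (long i = 0; i < N; i++)
            isum += i * 3 + 1;             /* integer multiply and add */
        clock_t t1 = clock();
        for (long i = 0; i < N; i++)
            fsum += (double)i * 3.0 + 1.0; /* the same work in floating point */
        clock_t t2 = clock();

        printf("integer loop: %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("float loop:   %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

The point of the exercise is simply to see how much further the floating point loop falls behind on an ARM chip than it does on an x86 desktop.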

Now, I do not claim to be any expert on the calculations done by any given BOINC project, but I cannot recall the last time I did a science problem that did not involve decimals.  As such, these calculations take longer on ARM processors than on x86, even when the quoted clock speeds are somewhat comparable.

That being said, as someone who loves being able to use his tablet exactly when it is needed, I usually have it charging when it is not in use, and fairly often when it is in use as well.  So it can work freely on tasks most of the day/week/month/year....

For those getting started with BOINC, this can greatly propel them forward in points and make them feel like they are making great progress.  But from the perspective of a serious cruncher, my main machine can bring in more cobblestones in a single day than the Nexus 7 could in an entire year of nonstop running.  This also touches on my last post: I am not convinced that the projects, and especially the willpower to write the code, exist to bring in all these additional devices, each with limited individual ability but massive collective ability.

Sunday, January 20, 2013

Building a Computer Part 2: The Build!

I am not sure what to believe any more; the more videos you watch on building a computer, the more they emphasize what can go wrong.  (Newegg has a nice three-part series; part 2 features the actual building.)  These issues seem like they can be entirely accidental rather than something you consciously do, static electricity being the biggest one.  Keeping that in mind probably led to my biggest issue in this build: sometimes force is necessary.

[Photo: DSC02875]

Part one was the test build before putting everything in the case.  The biggest thing to do in this portion is to get the CPU into place.  The Intel i5-3350P uses a socket that Intel bills as "zero insertion force," which can be quite misleading.  You do just lightly drop/place the CPU into the socket, being very careful not to touch the processor itself, but closing the cage/bracket that locks the CPU into place both sounds and feels like you are breaking something, as it honestly requires quite a bit of force.  Then it is simply a matter of putting on the heatsink/fan; I decided to go with the stock fan included (while I am a cruncher I am not into overclocking, so this should be sufficient).  Hook up the RAM and the GPU, plug in the few required cords from the PSU, do a test boot, hope you see the BIOS splash on the screen, and celebrate.

[Photo: DSC02876]


This is where the first issues of this build started to plague me.  I was not getting sustained power, so I was very scared something was shorting out the entire system.  Careful examination of the cords made me realize one of them was not completely plugged into the PSU.  That was not the end of the issues: I then got nothing on screen even though I had sustained power.  It turns out more cords were not plugged in all the way, this time the HDMI cord from the GPU to the screen.


Then comes the annoying part: putting everything into the case.  Let me just say your motherboard user's guide is essential for knowing which cords go where.

[Photo: DSC02877]

Everything is in except the GPU, and the cords are not a complete mess.  Let's see it with the GPU in: a dual-slot, though passively cooled, Nvidia GT 640.

[Photo: DSC02881]

Alright, let's boot up!  Uh oh, nothing is showing up on screen.  The cord was completely plugged into the GPU, but the not-enough-force problem plagued me once more: it turns out I had not completely seated the GPU.  It needed that extra push, which I was scared to give because the motherboard bends slightly while you insert the card.

Well now it is crunching away, though still in tweak mode.

Friday, January 18, 2013

Building a Computer Part 1: Finding Parts

In the past week I sourced the parts for a new crunching machine, had them arrive, assembled them, and set up the entire computer.  This post details my thought process behind finding the parts.

Step 1: Find your favorite computer parts supplier.

I went with Newegg because they have worked wonderfully for me in the past, and I figured, "if it ain't broke, don't fix it!"  Every time I have dreamed of building a computer before, even assembling shopping carts of what I would like to buy, it was really just that: a dream machine.  Top-of-the-line everything, including more spent on video cards alone than what I spent on this entire computer, and nearly as much on the processor.

Step 2: Decide whether you are building a dream machine or a realistic, affordable machine.

This time, when I put together a cart, I said I was going to do two things: look for nice parts that I like, but not top-of-the-line parts, and, where possible, go for sale items with highly rated user reviews.  I already had the video card I was going to use in this machine (let's just say that involved a lot of stupidity on my part, buying a part without actually checking whether it would fit in its intended machine), so all I had to do was get a few key parts.  I even realized I could save money on an optical drive, as I do not intend on inserting any CD, DVD, etc. into this computer any time soon (I may purchase a Blu-ray drive later).

So, since I was going for affordable, did not need an optical drive, and already had a GPU, I needed a processor (CPU), memory, a motherboard, a hard disk drive, and a power supply.  Honestly, that is it.  (I should note I also already had the cable to hook the computer up to my TV, which serves as the display.)

Proceeding with those intentions: I am a major Intel fan, but instead of going for an i7, I decided to go for something about half the price but still incredibly powerful, an i5 quad-core processor.  The rest came down more to personal preference, of course making sure all the parts were compatible.  That being said, before any mail-in rebates I had saved 50 dollars on this computer, for a total price of just under 600 dollars, or nearly 700 dollars if you factor in the GPU I already had.

Now, I make no claim that 700 dollars is a small amount of money, but I did scope out similar systems from large computer companies that make hundreds of identical machines, and I even looked at companies that custom build PCs to spec.  In the first case, through large companies like Dell and HP, a similar system would cost at least $1,000.  The custom built-to-spec options were somewhat closer, but you pay them for putting it together (and for taking responsibility for the build), and for taking on those burdens they charge an extra $100-$200.

Stay tuned for Part II featuring the actual build.  ( I took pictures, I hope some of them are good enough to post!)


  

Wednesday, January 9, 2013

World Community Grid Badge System

Based on a post about the World Community Grid statistics system, I have decided to write this post, hopefully starting a discussion on the pros and cons of each approach.  World Community Grid awards badges based on total run time dedicated to each project instead of relying on the point system that basically every other badge-awarding project uses (though there seems to be no firm standard between projects for what earns a point/credit).

World Community Grid attracts a different type of cruncher than the points-based crunchers of several other well-known projects.  In theory, total run time puts everyone on an even playing field, since it really just measures how many hours each core has been logged on and crunching.  So the amount of money people pour into their machines, or spend building farms for crunching, does not make as large a difference as it does on other projects; i.e., someone with a 10-year-old machine need not lag far behind someone with a 3-year-old machine, even though the 3-year-old machine has a far faster and more efficient CPU.

My issue is that World Community Grid barely takes into account how much work someone is actually doing for a project.  I have a GPU that has crunched quite a few of World Community Grid's Help Conquer Cancer (HCC) GPU tasks, each of which, from what I have read, is roughly equivalent to two of the HCC CPU tasks.  An HCC CPU task takes about 2.5 hours on average, based on the statistics on the World Community Grid site, while the run time awarded for an HCC GPU task is just the wall-clock run time of that task, which for me is about 30-40 minutes.  This seems very odd: for the same amount of work towards the project, one person can be awarded 5 hours towards a badge and another person half an hour.
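
To put those numbers side by side, here is a tiny C sketch using the figures quoted above (about 2.5 hours credited per HCC CPU task, roughly two CPU tasks' worth of science per GPU task, and about half an hour of credited run time per GPU task).  These are my rough estimates from this post, not official World Community Grid statistics.

    #include <stdio.h>

    int main(void) {
        /* Rough figures from the post above; estimates, not official WCG statistics. */
        double cpu_task_hours    = 2.5; /* badge run time credited per HCC CPU task            */
        double gpu_task_hours    = 0.5; /* wall-clock run time credited per HCC GPU task       */
        double cpu_tasks_per_gpu = 2.0; /* one GPU task covers roughly the work of 2 CPU tasks */

        double cpu_hours_per_unit = cpu_task_hours;                     /* per CPU task's worth of science */
        double gpu_hours_per_unit = gpu_task_hours / cpu_tasks_per_gpu; /* same science, done on the GPU   */

        printf("CPU cruncher: %.2f badge hours per unit of work\n", cpu_hours_per_unit);
        printf("GPU cruncher: %.2f badge hours per unit of work\n", gpu_hours_per_unit);
        printf("Roughly a %.0fx difference\n", cpu_hours_per_unit / gpu_hours_per_unit);
        return 0;
    }

With these estimates the CPU cruncher earns about ten times the badge run time for the same amount of science, which is the heart of my complaint.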

I see both sides of this argument.  On GPUGrid, people basically buy their way to the best badges, and into places of high recognition, by having several top-of-the-line graphics cards.  So I understand why World Community Grid wants to avoid that, but at the same time I feel they fail to properly acknowledge someone's actual contribution to a project in terms of work done.

Thoughts?

Saturday, November 10, 2012

Define Device Profiles

I have recently realized the full importance of defining device profiles for each device you run, especially when a project lets you set preferences on the project website that override anything you have set in BOINC on your machine.  The only project I have really seen do that is World Community Grid.

I honestly do not remember whether I set the default profile for World Community Grid myself, beyond specifying the projects I wished to crunch, or whether it read the BOINC settings on my main machine and used those.  Either way, I have three different computers with three very different concerns, and it wasn't until I started looking at ways to optimize them that I realized the issues I had.

First, my oldest computer, a laptop that has been on its last legs for three years now (the Energizer laptop).  A few months after I posted that I was retiring it, I reinstated it as a WCG-only machine with reduced settings.  It downloaded a Clean Energy Project task that should have taken only 14 hours to complete, but things got fishy after it had been running for 20 or so hours in one week.  Because the laptop gets shut down at night and stopped and started for other reasons, I realized it is exactly the kind of computer the Clean Energy Project's notice about system requirements is aimed at.  With checkpoints few and far between (I am not sure there really are any), even leaving it running for four or so hours at a time might not produce a checkpoint, so all that work is lost on shutdown.  So that was my first test ride.

Second, the workhorse: the tower that has next to no other demands on its processor and GPU 90% of the time and is left to crunch away happily; being a tower, it also stays quite cool.  This one was suffering far too much from being restricted to the level of the others, and after watching it for some time yesterday and today, I realized its settings had somehow been rewritten according to my default WCG profile.  That was quickly solved with a new device profile set to maximum power.  (Let's see how this works out over a few days.)

Lastly, the machine with so many heat issues it's not even funny: my main laptop.  This one I need to constantly toy with, and while it undeniably has the most powerful CPU of the three, the heat issues lead me to throttle its crunching, just so I don't fry all sorts of components.

So far my experience with these device profiles has been rather nice.  I am not sure whether WCG limits you to four profiles (Default, Home, Work, School), but should I need more, I will look into whether you can name your own profiles and set your own settings.

Happy crunching and keep on processing away!

Wednesday, October 10, 2012

Milkyway@home on Raspberry Pi (FAILED)

Tuesday Oct 9th:

I found out yesterday that a few other projects offer source files you can compile yourself to customize the application for your own system, and these even work on ARM processors (the big hurdle to crunching BOINC projects on the Raspberry Pi).  One of them is Milkyway@home, so I got to work on it last night, only to realize that somehow the whole BOINC client and manager pair was FUBAR.  After troubleshooting, googling, and searching some more, I realized the authorization configuration file was to blame.

In trying to get things to work, I had somehow wound up with copies of the authorization file all over my Pi, and the one everyone said was important was oddly blank.  (Hard to have a password in a file when there is nothing in the file!)  So I did a complete wipe of everything BOINC-related on my Pi, then reinstalled the client and manager.  This time the manager was able to connect to the client host, so success on that front.

Then I attached to Milkyway@home and got the usual message that my arm-linux-unknown CPU type was not supported.  I was not shocked; I had this next trick up my sleeve:

sudo apt-get install boinc-app-milkyway

I ran that in the terminal, which installs the Milkyway@home application for the BOINC client.  Then I went back to the GUI manager and clicked Update on Milkyway@home.  Going to the event log, one of my best friends when troubleshooting BOINC, I saw a red message and my heart sank.  But when I read the message I got very excited all of a sudden: it was not about an unknown and unsupported processor, it was that I lacked the disk space to download a task for the project.  I changed the disk settings and was still slightly short, so I cleared out a few Raspberry Pi programs I have no intention of using for a while.  The big one that freed quite a bit of space was Scratch; sorry MIT, but if I am going to code I am going to relearn a far more substantial programming language such as C++, Java, or Python.

I hit the update button again, hopped over to the event log, and saw this glorious message: "got 1 new tasks."  It gives an ETA of 36 hours or so, which is not shocking given the hardware on the Raspberry Pi.
  
The Bad:

Things seemed good: I watched it crunch for 10-20 minutes with no issues, switched off the screen, and went to work on other things.  When I checked back a few hours later, it said it had only crunched for "1 hour."  Thinking that's not right, I checked whether the Raspberry Pi was going to sleep or into some other sort of standby, and found no sign of that.  But in the event log there was a troubling set of notices.  For nearly 10 minutes, every 30 seconds or so, the task would get restarted; then, after those 10 minutes, the task would abort as failed.

While I cannot know this for certain, my belief is that the task crunches fine for roughly 1 hour and 40 minutes because that is when it reaches its first checkpoint.  When it hits that checkpoint, it runs into a bunch of errors and, so to speak, freaks out.

So while I had a lot of hope, it was dashed: I can get tasks from Milkyway@home and have them start crunching, but I have not been able to get one to complete.  (I ran it for half a day, and every task it received ran into the same issues.)

Sunday, August 5, 2012

Dream Machines

One of the few magazines I subscribe to is Maximum PC, and I got around to reading part of the latest issue, the Dream Machine issue.  I could not help but imagine how incredible such a machine would be for crunching BOINC projects, though at about 14-15 thousand USD it is definitely, completely out of my price range.  It uses an 8-core Xeon chip, which some googling indicates should have Intel's Hyper-Threading technology, meaning it can process 16 tasks at once.  In addition, it has two dual-GPU video cards, making it a quad-SLI setup.

The GPU configuration raises quite a few questions for crunching.  Some googling seems to indicate that later versions of BOINC disable SLI while in use, so each GPU should run independently, though I am not sure whether that holds for the somewhat recent dual-GPU video cards.  Even with all my reading on those, I am still not sure whether both GPUs on one card can in fact operate completely independently of each other.  Either way, I am sure that, as top-of-the-line video cards (though definitely aimed more toward gaming), they can crunch away incredibly effectively.

For any purpose I would actually use the system for, nearly all of it would be excessive, except perhaps half the RAM and the liquid cooling.  Though I have always been incredibly skeptical of liquid cooling: while I know it cools far more effectively, I am just very worried about a leak basically ruining the entire system.  And then there are items such as the 12 TB of hard drives, not counting the storage in the form of SSDs.

I have started building up what I have dubbed my "Technology Play Fund," though it was completely wiped out getting all the necessary items for the Optiplex 745.  One point the Dream Machine issue pushes is that technology advances so fast, and a fixed level of performance keeps getting cheaper, that it is not uncommon for key components and high-priced performance pieces to be not just affordable but practically standard only 4 or 5 years later.  So in 5 years, will a good number of BOINC crunchers have similar machines?

While I do not quite have the technology background to build this, seeing how effective a video card can be at crunching these BOINC projects, my ideal computer may be a cluster-type system in which each board hosts one or two graphics cards.  Better yet, this type of system could be modular, allowing it to be expanded board by board to increase its computing power.

What is your Dream Machine?

Tuesday, July 17, 2012

Which projects should you crunch? Standard office Desktop

If the desktop is over 5 years old you may want to refer to my previous post, though some parts of this one might still be useful.  Also, by "standard office desktop" I do not mean a gaming rig, i.e. something that likely contains one or more high-quality GPUs, has added features up the wazoo, and potentially has certain components overclocked.  I mean something you'd likely find in your office (if your office still has desktops).  Most of these contain CPUs that are still fairly recent; while they may not be the latest bazillion-core monstrosity put out by AMD or Intel, they often perform well enough for what they are.

Now, just so I am not getting anyone in trouble: if you do not own the computer yourself, i.e. if it actually is a company computer (and you are not the owner of the company), you probably shouldn't be loading software or adding hardware to it, no matter how well you know what you are doing.  But if you do own the computer, you consider yourself very serious about crunching BOINC projects, and you are somewhat tech savvy, may I suggest picking up a decent GPU to add to your system (making sure you stay within your PSU's wattage limit, as sketched below).  No, I am not suggesting you go and purchase one of the $2k graphics cards that find their way into high-powered computing clusters.  See my previous posts about GPUs: you can still get cards that are incredible workhorses for under 100 dollars.  While the card I got has since had several special offers on it expire, so it is now closer to 100 dollars than when I bought it, it is still reasonably priced compared to many GPUs.
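
On the wattage point, here is a minimal C sketch of the sanity check I have in mind before dropping a card into an office tower.  Every number in it is an assumption for illustration (a 305 W supply like the one in my Optiplex, a guessed 150 W for the rest of the system under load, a 65 W budget card, and an 80% rule-of-thumb ceiling); substitute your own PSU rating and the published TDP of the card you are considering.

    #include <stdio.h>

    /* Rough PSU sanity check before adding a GPU. All wattages and the 80%  */
    /* derating are illustrative assumptions, not measured values.           */
    int main(void) {
        double psu_watts      = 305.0;            /* PSU rating printed on the supply         */
        double rest_of_system = 150.0;            /* assumed draw of CPU, board, drives, fans */
        double gpu_tdp        = 65.0;             /* assumed TDP of a budget card             */
        double usable         = 0.80 * psu_watts; /* leave ~20% headroom (rule of thumb)      */

        double estimated_load = rest_of_system + gpu_tdp;
        printf("Estimated load: %.0f W of %.0f W usable (%.0f W rated)\n",
               estimated_load, usable, psu_watts);
        puts(estimated_load <= usable ? "Looks OK, with some headroom to spare"
                                      : "Too tight - pick a lower-power card or a bigger PSU");
        return 0;
    }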

For a system with a GPU that is able to crunch projects:

PrimeGrid and Einstein@home - These become a lot easier to crunch, since desktop computers tend to handle heat much better than laptops, and when the work runs on the GPU the task sizes are often very manageable.

GPUGrid - This is possibly my new favorite project, as it is very science- and health-care-oriented in its goals; as the name suggests, it only runs on GPUs.

For the CPU.

I'm reusing the same basic write-up as before, just marking the changes.

PrimeGrid - If you really want to run it, go ahead.  It tends to make your machine run extra hot (hotter than usual for BOINC tasks, it seems), but desktops handle heat far better than laptops, so that is less of an issue.  Lately, though, on every machine it tends to hand out bloated work units with somewhat short completion deadlines.  While the deadlines are not enforced strictly, they often cause the tasks to be given High Priority status.

World Community Grid - A great choice, especially if you like to support humanitarian projects.  It often gives decent-length timelines, and they do a good job of matching jobs to systems: older systems are usually given less intense work units, often with shorter completion times, allowing even these older systems to crank out tasks at a decent rate.

eOn - Not the most intriguing of projects going by its description, but I have found it is my go-to project on older systems (less so now that it is summer and I brought the computer home from my office to crunch 24/7).  I like this one for a machine that is only sporadically turned on (being on a campus with a different schedule each day, I often do not need to turn the computer on for long stretches of the day), as on these older systems it seems to hand out 10-minute tasks.  While they do not generate a lot of credit, it is not a huge loss if you turn off the machine, don't turn it on for a few days, and a deadline passes.  But for a desktop, which is left on more often and can usually handle almost any task, this might be overly simplistic, especially if your CPU has more than 2 cores.

Yoyo@home - This one seems to be developing a lot of the same problems as PrimeGrid.

Einstein@home - Behaves similarly to World Community Grid, and a great choice if you are fascinated by outer space and the solar system.

Climate Prediction - I almost can't believe I am recommending this project here, because it hands out incredibly huge work units.  But what took me a while to realize is that the deadlines are usually a year away, and even a sparsely used system should be able to crunch 500 hours in a year.  Also (I need to double-check this), it seems to report in periodically and grant credit along the way, so if you like watching the credit numbers rise, it is not as if you are putting off earning credit for a month or more.

Saturday, July 14, 2012

Which projects should you crunch? 5+ year old System

While we all have our own interests, and different projects strike each of us as worthwhile, I will give my thoughts on a few BOINC projects and their suitability for a given system.  As I have cut down and refined the projects I run, I have only run a handful of them across most of my machines.  As such, the projects I will focus on in this post are PrimeGrid, World Community Grid, eOn, Yoyo@home, Climate Prediction, and Einstein@home.

PrimeGrid - Avoid at almost all costs.  Not only does it tend to make your machine run extra hot (hotter than usual for BOINC tasks, it seems), but lately, no matter the machine, it tends to hand out bloated work units with somewhat short completion deadlines.  While the deadlines are not enforced strictly, they often cause the tasks to be given High Priority status.

World Community Grid - A great choice, especially if you like to support humanitarian projects.  It often gives decent-length timelines, and they do a good job of matching jobs to systems: older systems are usually given less intense work units, often with shorter completion times, allowing even these older systems to crank out tasks at a decent rate.

eOn - Not the most intriguing of projects going by its description, but I have found it is my go-to project on older systems (less so now that it is summer and I brought the computer home from my office to crunch 24/7).  I like this one for a machine that is only sporadically turned on (being on a campus with a different schedule each day, I often do not need to turn the computer on for long stretches of the day), as on these older systems it seems to hand out 10-minute tasks.  While they do not generate a lot of credit, it is not a huge loss if you turn off the machine, don't turn it on for a few days, and a deadline passes.

Yoyo@home - This one seems to be developing a lot of the same problems as PrimeGrid.

Einstein@home - Behaves similarly to World Community Grid, and a great choice if you are fascinated by outer space and the solar system.

Climate Prediction - I almost can't believe I am recommending this project here, because it hands out incredibly huge work units.  But what took me a while to realize is that the deadlines are usually a year away, and even a sparsely used system should be able to crunch 500 hours in a year.  Also (I need to double-check this), it seems to report in periodically and grant credit along the way, so if you like watching the credit numbers rise, it is not as if you are putting off earning credit for a month or more.

Tuesday, July 3, 2012

I Created a Monster

The very first task given to my GPU was a GPUGrid long-run task, which the project says top-notch cards can crunch in 8-12 hours.  Well, I knew for certain my card is not "top notch," especially at the price I paid.  But after a little more than 39 hours of crunching, it completed a task whose BOINC credits amounted to about 5 months of credit from the typical running of my other two machines.  The chart above shows the day that task was completed, with basically every day before it dwarfed almost to nonexistence.

Another fun chart is shown below: my world position on BOINC based on total credit.  I was slowly making progress, and then the day that task validated and posted, BAM! My world position jumped off a cliff.  I still want to see if I can find a Linux tool that will let me benchmark my GPU, but it seems the 144 CUDA cores and 1.5 GB of RAM are kicking @$$ and taking names, even if the card does not have the quickest clock speed among GPUs in a similar price range.

Friday, June 29, 2012

Optiplex 745 Signing on

In my very first blog post I mentioned that I was expanding my "farm," and the first machine I got almost solely for the purpose of crunching numbers (it will likely be used as a backup machine too) is an Optiplex 745, which in and of itself is not the highest-powered tower available.  But as my second post indicated, I got a CUDA-enabled GPU with 144 CUDA cores, an Nvidia GT 440, which at 80 dollars seems quite reasonable, especially for a GPU that can still be used in a machine with only a 305-watt PSU.  Since I also needed to buy a hard drive for the machine, which I otherwise got practically for free thanks to some wonderful friends in IT, I needed to equip it with a brand new OS, and what better than Linux?  Swayed in my Linux choice by my long-time tech mentor and college roommate of 3 of my 4 years of undergrad, I of course went with Ubuntu, and booted the installer from a flash drive.

The installation process went easily.  Way too easily: just when I thought I had everything up and running, BOINC could not recognize my graphics card.  I was about to be PO'd, because everything said this card should work, and I did not spend 80 dollars just so I could use HDMI cables rather than a Dell-specific DVI connector.  Well, a little bit of Google fu led me to a page that was incredibly helpful, but which somehow I cannot find at all when searching from my Windows PC.  I swear it was on the Ubuntu help forums.

Well, let me just say that getting CUDA up and running took my otherwise pain-free setup and turned it into a major pain.  I have no problem working in a terminal, but so much of what the page had me do was: type these two commands, then wait 5 minutes; type two more commands, then wait another 5 minutes; until eventually you get to compiling the entire SDK, which I almost wish I had timed.  Let's just say, for anyone who goes through this process, that when you set it to compile you might as well find something else to do for a while, and just check on it every 10 minutes or so to see if it is done.  Sadly, the first and second times I tried to get everything working, it failed: when I went to test that the install had worked properly, I got an error.  Thankfully the third time was the charm, and it is now happily crunching away.

While I personally love Linux, and while it is becoming more and more user friendly, so much so that I think the average user could get around and navigate a Linux PC just as well as a Windows machine or a Mac (after a brief adjustment phase), it sadly is still true that you cannot really do any heavy lifting in Linux without a little knowledge of the terminal/command line.  Windows and Mac, by contrast, have gotten to the point where most people with those machines do not even know such a thing exists.

Tuesday, June 26, 2012

Why you should care about CUDA

I by no means started crunching projects expecting to get any reward out of it other than knowing that I helped advance science.  But lately I have found that I gravitate towards projects that offer some sort of reward, even if it is a measly graphical "badge" recognizing some level of accomplishment in the crunching.  I do not know why, but it is nice to feel recognized, even when it takes relatively little effort from the people running the project.  A step up from that: GPUGrid offers two types of badges, level badges, which are based solely on credit towards the project, and publication badges, which appeal to the academic in me, as they link to a specific scientific publication and tell you how much you helped contribute to the science in it.

Possibly the most substantial reward I have come across that goes directly to the owners of the machines crunching the tasks is from Einstein@home: if your machine is the discoverer of a new pulsar, you are rewarded with a nice framed certificate.  For example, a picture they shared from the last batch they sent out is here.

So I actually encourage projects to find some way to quantify and reward their participants, even in some completely small way; and, in my personal opinion, the more you can tie it to success in the project, the better.  Currently the only badge I have is one for Harmonious Trees under Yoyo@home, which appeals to the graph theorist in me; I actually know some recent Ph.D. students whose dissertations are in a closely related area of graph theory.  Instead of just giving me a badge based on the number of credits crunched, it would be nice if they could "estimate" how many trees I found a harmonious coloring for.  Granted, depending on the project this can get quite hard, as I understand that for some topics one task may be split into many, many work units sent out to participants.

Monday, June 25, 2012

Expanding the Farm

If you want to use GPUs to crunch projects and are a fan of NVIDIA graphics cards, then CUDA is essential and might as well be your best friend.  CUDA is a parallel computing architecture that makes it easier than ever before for programs written in higher-level languages (mostly C and its relatives) to be executed on a GPU.  If you have a CUDA-enabled GPU, there is one more important number you should look for: the count of CUDA cores.  As I do not yet have a GPU crunching projects, I am a bit unsure whether CUDA cores behave like CPU cores for crunching BOINC tasks.  (Intuition tells me no, but if the answer is yes I will be pleasantly surprised, as it is very easy to get a GPU with a substantial number of CUDA cores.)
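
To give a rough feel for what those cores do, here is a minimal plain-C sketch (not actual CUDA API code) of the kind of data-parallel loop CUDA is built to accelerate; the function name and the operation are just illustrative.  A CPU steps through the iterations on one or a few cores, while CUDA turns each iteration into a lightweight GPU thread and spreads those threads across the CUDA cores, which is why a count of CUDA cores is not directly comparable to a count of CPU cores.

    #include <stddef.h>

    /* The classic data-parallel pattern: every element is updated independently, */
    /* so no iteration has to wait for any other. A CPU walks this loop on one or */
    /* a few cores; a CUDA-enabled GPU would run one lightweight thread per       */
    /* element, scheduled across its CUDA cores.                                  */
    void scale_and_add(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

The catch is that this only pays off for work made of huge numbers of independent, mostly floating point operations, which is the kind of problem GPU-enabled BOINC applications go after.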

I've spent some time googling, but sadly I can find no directly comparable spec for ATI/AMD graphics cards.  That does not mean they cannot be used for crunching projects; for anyone looking to use GPUs to crunch, there is this very helpful resource.  In case you didn't want to click the link: it sounds like mostly any GPU on the AMD R600 or R700 platform or later will work.

One last thing to note (not sure it matters with most semi-recent GPUs): for BOINC projects the card will need at least 256 MB of memory of its own.  That figure is the bare minimum for most projects (a similar minimum applies even for standard CPU processing), and some projects need more than that to run on any given processor.

One more small fun fact about GPUs and crunching: most commercial CPUs on the market today, if not all of them, top out at less than 100 GFLOPS even with overclocking, while it is easy to find a GPU for around 100 dollars that easily breaks 100 GFLOPS, if not several times that.  A small reminder: a FLOP is a floating point operation, the trailing "S" denotes that we are measuring how many of them happen per second, and the G is giga, with its usual meaning in the computing world.
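
As a back-of-the-envelope illustration of where numbers like that come from: theoretical peak GFLOPS is roughly cores × clock speed in GHz × floating point operations per core per cycle.  The inputs in this little C sketch (a quad-core CPU at 3.0 GHz doing 8 FLOPs per core per cycle, and a 144-core GPU at 1.5 GHz doing 2 FLOPs per core per cycle) are assumed round numbers for illustration, not the measured specs of any particular chip.

    #include <stdio.h>

    /* Theoretical peak GFLOPS = cores * clock (GHz) * FLOPs per core per cycle. */
    /* All inputs are illustrative assumptions, not measured hardware specs.     */
    static double peak_gflops(int cores, double clock_ghz, double flops_per_cycle) {
        return cores * clock_ghz * flops_per_cycle;
    }

    int main(void) {
        printf("Quad-core CPU, 3.0 GHz, 8 FLOPs/cycle: ~%.0f GFLOPS\n",
               peak_gflops(4, 3.0, 8.0));
        printf("144-core GPU, 1.5 GHz, 2 FLOPs/cycle:  ~%.0f GFLOPS\n",
               peak_gflops(144, 1.5, 2.0));
        return 0;
    }

Real sustained throughput falls well short of either peak, but the gap between those two lines is why a card costing around 100 dollars can out-crunch a far pricier CPU on floating-point-heavy projects.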

[Image: World Community Grid signature]