1) Forums : Technical Support : Tasks will not complete (Message 11347)
Posted 1 Jul 2012 by Nuadormrac
Running along the same lines are a few work units I have been getting on my 64 bit Fedora 16 Linux machine.
My Windows XP 32 bit machines do not seem to have this problem at all.

I have been getting a number of work units that have run way longer than most work units usually do.
Instead of the usual 14 to 16 hours I have been seeing, they have run for as long as 32 hours or more, yet they still get the standard 420 points for the effort.

I have just aborted two work units that appear to have restarted during their normal running time.
One had run for 40 hours: after 30 hours it was at 77% with 7 hours to go; this morning, at 40 hours, it showed 66% with 17 hours to go.
The other just stopped after 10 hours with 3 hours to go, sitting at 77.496% for a number of minutes before restarting at 60% with 7 hours to go, so I aborted it as well.

This is only affecting my 64 bit Linux machine, very curious.
Just something else to think about.

I am posting this for information only, as I have stopped work for the time being, so I won't be doing any more work units.


I had run into what seems to be a similar problem with a task I just aborted on coming home. When I left for lunch it was at 96+% after 29 hours of running; when I got home it was back at 60%. It seems to get itself into an infinite loop, though in my case this is on Windows 7 Professional x64. I've noticed this with a handful of tasks, which over the past month, when my team was running this project as PotM, would amount to perhaps 1%. It's not a significant percentage of tasks, and is much smaller than the number that complete without incident after tasks download successfully, but there are a couple. From past experience, they just get themselves stuck and go back and re-process what was done previously.
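The backward jump described above is consistent with how checkpoint-based clients generally resume work: progress made since the last on-disk checkpoint is lost when the task restarts. Below is a minimal, hypothetical sketch of that behavior (not the actual BOINC or Cosmology@Home code; the function name and percentages are illustrative only):

```python
# Hypothetical sketch: a checkpointing task only persists progress at
# checkpoint boundaries. If it crashes or restarts, it resumes from the
# last checkpoint written to disk, not from its in-memory progress, so
# the reported completion percentage can jump backward.

def resume_progress(last_checkpoint_pct, progress_at_stop_pct):
    """Return the progress a task shows after a restart.

    last_checkpoint_pct  -- completion saved in the last checkpoint file
    progress_at_stop_pct -- in-memory completion when the task stopped
    """
    assert last_checkpoint_pct <= progress_at_stop_pct
    # Everything after the last checkpoint must be re-processed.
    return last_checkpoint_pct

# Numbers matching the symptom above (illustrative): the task was at
# ~96% in memory, but the last checkpoint on disk was at 60%.
print(resume_progress(60.0, 96.4))  # prints 60.0
```

If a task then repeatedly dies before reaching its next checkpoint, it re-processes the same span forever, which would look exactly like the "infinite loop" described here.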

It's certainly an issue worth documenting, though if there's back and forth on whether it exists, I'll stay out of any politics that might be going on; some of what I've read in the replies above suggests a bit of angst between participants...
2) Forums : General Topics : Credits (Message 8720)
Posted 19 Nov 2009 by Nuadormrac
It's come up in this thread a little, along with the long thread on credits which is now locked (and kind of needs to stop growing if threads can't be broken into separate pages, given that really long ones are already difficult to load on dialup connections).

Anyhow, with all the comparison of this project vs. that one, and talk of credit parities: crunch time is not the only resource a project can consume. RAM allocation is also a resource, and as I remember (I have to go from memory, as Cosmology won't let me download a WU now, given the "giving up on download, file not found" errors), Cosmology was showing a much higher memory allocation for its thread (via inspection in Task Manager) than other projects; I'm talking over 100 MB. The actual commit charge on the machine went up a fair bit when it loaded (more like 200-300 MB before vs. after process loading by BOINC), which on 1 GB of RAM (admittedly a little low these days) represented a fair chunk. Given that the OS was taking about 300 MB, along with AV software and drivers, we're talking over half the RAM pool in this case.

Not a complaint, or anything of the sort; but the higher credit vs. some projects really did seem fair compensation considering the RAM utilization, especially where some resource-intensive uses (such as gaming) might take place on the same machine, since heavy RAM use can also mean a lot more swapping.

That's on the 1 GB machine (a uni-core; yeah, I'd prefer 2 GB, though money is tight). If one had a dual or quad core, there's also the issue of RAM available per core, given that BOINC could load 2 or 4 WUs onto it. Each core would be processing its own task, taking memory for itself, i.e. the commit charge for the given task times the number of copies running (one per core).

As long as other resources are being taken up in greater measure than on other projects, I do think it's fair to have somewhat higher credit returns (as was previously, and perhaps still is, the case), given the demand it puts on the machine and the amount of RAM left for one's own programs. The flip side is that if all else were absolutely equal (not really possible), someone really after credits (vs. "doing it for the science") would probably favor the project that puts the smaller burden on their RAM pool. This holds especially if a person actively uses the computer for something that needs real-time performance and starts noticing a lot of swapping and hard-drive thrashing during their own activity, as things begin to feel "a bit laggy".
3) Forums : Technical Support : URGENT PROBLEMS THREAD (2009 and after) (Message 8718)
Posted 19 Nov 2009 by Nuadormrac
Slight problem with downloading new work. The client requests a new task, but the download gives up within mere seconds and fails.

11/19/2009 2:15:53 AM Cosmology@Home Sending scheduler request: To fetch work.
11/19/2009 2:15:53 AM Cosmology@Home Requesting new tasks
11/19/2009 2:15:58 AM Cosmology@Home Scheduler request completed: got 1 new tasks
11/19/2009 2:16:00 AM Cosmology@Home Started download of params_110209_081824_1.ini
11/19/2009 2:16:02 AM Cosmology@Home Giving up on download of params_110209_081824_1.ini: file not found

It's saying the file isn't found, presumably on the server itself, so it can't be downloaded to the client. This happened twice before I set No New Tasks (NNT) to keep my allowable queue from filling with failing downloads.