Forums : Technical Support : Memory workload
TheHarperdragon
Joined: 16 Jan 09
Posts: 4
Credit: 281,001
RAC: 0
Message 8378 - Posted: 29 May 2009, 21:14:22 UTC

Why is it that Cosmology work units are using anywhere from 6 to 10 times as much memory as any other BOINC project? I'm running this on 4 different systems, all dual core or better, and this problem is consistent across the board.
Beware of Dragons - for you are crunchy and good with ketchup!
Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 188
Credit: 15,273
RAC: 0
Message 8390 - Posted: 31 May 2009, 20:32:35 UTC - in response to Message 8378.  

The amount of memory that's used is simply related to the complexity of the calculation we are doing.

Do you feel that using a large amount of memory is a problem and if so, why?

I am particularly interested because I have been thinking about a second application for C@H that would be very memory intensive.

All the best,

Ben



Creator of Cosmology@Home
TheHarperdragon
Joined: 16 Jan 09
Posts: 4
Credit: 281,001
RAC: 0
Message 8391 - Posted: 31 May 2009, 21:47:34 UTC - in response to Message 8390.  

The worst of the problem is that while I was running two work units (I'm taking a break for the moment), my computer started to slow down drastically until I increased my RAM to 4 GB. My 3 other crunchers run only BOINC, so it isn't a problem there. It just seems to me that the amount of memory these calculations use is a little (lot) excessive in relation to other projects out there.
And yet Cosmology is one of my favorite projects to run... go figure.
Beware of Dragons - for you are crunchy and good with ketchup!
Emanuel
Joined: 28 Oct 07
Posts: 31
Credit: 316,100
RAC: 0
Message 8392 - Posted: 1 Jun 2009, 8:08:11 UTC - in response to Message 8391.  
Last modified: 1 Jun 2009, 8:23:36 UTC

Yeah, current CAMB WUs are extremely resource hungry, and that's problematic because using up so much RAM degrades system responsiveness like nothing else. I've actually found that the best way to mitigate the impact is to give BOINC free rein over my system (setting resources to 100% of everything), which at least keeps swapping to a minimum, but even that is not quite enough. On my quad core, 4 CAMB units running in parallel can take up virtually all 2 GiB of RAM in my system.

This makes it very hard to keep BOINC on during normal use (my computer isn't idle much except at night), and we've already seen in a few topics that people are being scared off by this. If you could, for instance, cut memory usage in half and still keep 75% of the speed (without giving up accuracy or WU size), I think it would be to your benefit.

Alternatively, I don't know how you're making use of the memory, but if you could walk through it more linearly, or make working copies that fit in the processor's L2 cache, that might help too: only transfer into and out of RAM in bulk, taking advantage of burst speed over random-access speed. A sketch of the blocking idea is below.
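
To make that concrete, here's a toy C sketch of cache blocking (an illustration only; I have no idea what CAMB's actual code looks like, it's Fortran anyway, and the names and sizes here are made up). A blocked transpose touches one tile at a time, so the cache lines it pulls in get reused before they are evicted:

    /* Toy illustration (not project code): work in cache-sized blocks
     * instead of striding across a large array. */
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N    2048   /* matrix dimension (hypothetical workload) */
    #define TILE 64     /* block edge chosen so two tiles fit in L2 */

    static void transpose_blocked(const double *a, double *b)
    {
        for (size_t ii = 0; ii < N; ii += TILE)
            for (size_t jj = 0; jj < N; jj += TILE)
                /* stay inside one TILE x TILE tile of source and destination */
                for (size_t i = ii; i < ii + TILE; i++)
                    for (size_t j = jj; j < jj + TILE; j++)
                        b[j * N + i] = a[i * N + j];
    }

    int main(void)
    {
        double *a = malloc((size_t)N * N * sizeof *a);
        double *b = malloc((size_t)N * N * sizeof *b);
        if (!a || !b) return 1;
        for (size_t i = 0; i < (size_t)N * N; i++) a[i] = (double)i;
        transpose_blocked(a, b);
        printf("b[1] = %.0f (expect %d)\n", b[1], N);  /* b[1] = a[N] = N */
        free(a); free(b);
        return 0;
    }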
Helli
Joined: 28 Aug 07
Posts: 2
Credit: 6,329,390
RAC: 0
Message 8393 - Posted: 1 Jun 2009, 8:28:41 UTC

If memory usage is too high, especially on i7 machines running 8 tasks at once, you have to play with the "Memory usage" settings in your profile:

Use at most xxx% of memory when computer is in use
Use at most xxx% of memory when computer is not in use


On my i7 rig with 6 GB of RAM I have to set both settings to 50%. If I leave them at the standard value of 90%, I have only 500 MB (of 6 GB) of free RAM. The only downside of reducing the memory usage is that only 6 client applications run; the other two are "Waiting for memory". The same limits can also be set per machine, as sketched below.
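
If you prefer a local override to the website settings, a minimal global_prefs_override.xml sketch matching those 50% values (the file goes in the BOINC data directory; I'm quoting the tag names from memory of the BOINC preferences documentation, so double-check them before relying on this):

    <global_preferences>
        <!-- percent of RAM BOINC may use; values here match the 50% example -->
        <ram_max_used_busy_pct>50</ram_max_used_busy_pct>
        <ram_max_used_idle_pct>50</ram_max_used_idle_pct>
    </global_preferences>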

Helli
sygopet
Joined: 2 Aug 08
Posts: 27
Credit: 204,771
RAC: 0
Message 8394 - Posted: 1 Jun 2009, 10:50:36 UTC - in response to Message 8390.  
Last modified: 1 Jun 2009, 10:55:01 UTC

Do you feel that using a large amount of memory is a problem and if so, why?

No problem at all, if you want to exclude any computer without very large amounts of RAM from participating!
My P4 with 512MB RAM can only get:
Message from server: CAMB needs 476.84 MB RAM but only 459.07 MB is available for use.
It makes no difference how I juggle the memory usage settings. I would be interested to hear from anyone who has managed to get Cosmology to function on 512MB and how they did it.
Since the whole idea of BOINC is to allow work to take place in the background without (unreasonably) affecting the computer's primary tasks, I would say that Cosmology needs to go back to the drawing board and design work units that need less than 100 MB of RAM to function. (I had no such problem with app version 2.15 - can we return to that?!)
I support the originator of this thread and many others who have tried to get this message through.
There is no warning for anyone attempting to join the project that RAM is critical.
There is little evidence that Cosmology staff are taking note.
Misfit
Volunteer tester
Joined: 9 Jun 07
Posts: 150
Credit: 237,789
RAC: 0
Message 8396 - Posted: 1 Jun 2009, 16:50:31 UTC

I have 4 GB of RAM, and 80% of it has been taken up with 2 instances of Cosmo running.
me@rescam.org
Brian Silvers
Joined: 11 Dec 07
Posts: 420
Credit: 270,580
RAC: 0
Message 8397 - Posted: 2 Jun 2009, 3:43:57 UTC - in response to Message 8394.  


Since the whole idea of BOINC is to allow work to take place in the background without (unreasonably) affecting the computer's primary tasks, I would say that Cosmology needs to go back to the drawing board and design work units that need less than 100 MB of RAM to function. (I had no such problem with app version 2.15 - can we return to that?!)


2.15 took around 300 MB and could have gone as high as 500 MB. For most of a run it would use 100-150 MB, with usage climbing late in the run. Asking for less than 100 MB is probably not realistic.

That said, tasks now take 500-800 MB on a consistent basis. The change in memory requirements was foisted on us volunteers right after the server outage a few months ago, with no prior warning, though it was at least acknowledged after several days that there had been an increase. It is troubling to see such a disconnect, not only between the volunteers and the staff, but between the staff themselves...


Misfit
Volunteer tester
Joined: 9 Jun 07
Posts: 150
Credit: 237,789
RAC: 0
Message 8398 - Posted: 2 Jun 2009, 4:52:20 UTC - in response to Message 8397.  

but even between the staff themselves...

There doesn't appear to be much staff left anymore.
me@rescam.org
Rapture
Joined: 27 Oct 07
Posts: 85
Credit: 661,312
RAC: 33
Message 8403 - Posted: 3 Jun 2009, 20:47:50 UTC - in response to Message 8390.  

The amount of memory that's used is simply related to the complexity of the calculation we are doing.

Do you feel that using a large amount of memory is a problem and if so, why?

I am particularly interested because I have been thinking about a second application for C@H that would be very memory intensive.



I have not had any problem running the work I receive. The only indication I see is the BOINC status message "waiting for memory", but that is temporary; it goes away and the work resumes. I believe this has something to do with my computer and not with Cosmology@Home. My BOINC manager is set to 10 GB of disk space and my computer has 2 GB of RAM. This is more than enough for any work to process!
Labbie
Joined: 8 Nov 07
Posts: 64
Credit: 859,370
RAC: 0
Message 8404 - Posted: 4 Jun 2009, 2:18:34 UTC - in response to Message 8403.  

The amount of memory that's used is simply related to the complexity of the calculation we are doing.

Do you feel that using a large amount of memory is a problem and if so, why?

I am particularly interested because I have been thinking about a second application for C@H that would be very memory intensive.



I have not had any problem running the work I receive. The only indication I see is the BOINC status message "waiting for memory", but that is temporary; it goes away and the work resumes. I believe this has something to do with my computer and not with Cosmology@Home. My BOINC manager is set to 10 GB of disk space and my computer has 2 GB of RAM. This is more than enough for any work to process!


That "waiting for memory" message appears because you don't have enough free RAM to run the WUs at the same time due to the high RAM usage.

The app size at Cosmo is ridiculously high. I can't run 4 WUs at the same time on my Q9450 with 4GB RAM.



Calm Chaos Forum...Join Calm Chaos Now
TCU Computer Science
Joined: 28 Oct 07
Posts: 3
Credit: 9,446,460
RAC: 0
Message 8405 - Posted: 4 Jun 2009, 3:24:31 UTC - in response to Message 8390.  

The amount of memory that's used is simply related to the complexity of the calculation we are doing.

Do you feel that using a large amount of memory is a problem and if so, why?




When a workunit consumes over 18 hours of CPU time and then aborts with
forrtl: severe (41): insufficient virtual memory
that is a problem.

I have disabled Cosmology on most of my computers because that wasted CPU time is extremely irritating. If you can prevent sending workunits to computers with insufficient memory, I will leave Cosmology enabled.
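
For what it's worth, BOINC work units already carry a memory bound (rsc_memory_bound) that the scheduler checks against a host's available RAM; the "CAMB needs 476.84 MB RAM" server message quoted earlier in this thread comes from exactly that check, so the fix may simply be for the project to raise the bound to match real peak usage. The application could also fail fast instead of 18 hours in. A hypothetical sketch of the idea, in C (CAMB itself is Fortran, and the 800 MB figure is just an assumption from this thread):

    /* Hypothetical fail-fast probe, not anything CAMB actually does:
     * try to reserve the assumed peak working set at startup and exit
     * cleanly if it isn't there, instead of dying 18 CPU-hours in. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PEAK_BYTES (800u * 1024u * 1024u)  /* assumed peak WU footprint */

    int main(void)
    {
        void *probe = malloc(PEAK_BYTES);
        if (probe == NULL) {
            fprintf(stderr, "need %u MB but it is not available; exiting\n",
                    PEAK_BYTES / (1024u * 1024u));
            return 1;  /* fail in seconds, not hours */
        }
        memset(probe, 0, PEAK_BYTES);  /* touch pages so the reservation is real */
        /* ... the real computation would go here ... */
        free(probe);
        return 0;
    }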
Anshul Kanakia
Volunteer moderator
Project administrator
Project developer
Joined: 30 Sep 08
Posts: 70
Credit: 164,860
RAC: 0
Message 8406 - Posted: 4 Jun 2009, 4:32:59 UTC

I am starting work on the CAMB application today to reduce its memory and processor usage as much as possible. I have also emailed Scott about this, and we should be meeting later this week to discuss a couple of different ways to reduce the burden of the WUs on user machines. I will obviously check the code once again very thoroughly, in case there are any memory leaks I can plug, and I will also try to get a more efficient multi-threaded version of CAMB up, which should at least help with the run time per WU. As of now I believe the app creates 4 threads per WU, or so my resource monitor says. I will have updates on my progress early next week.
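
For those wondering why multi-threading helps with memory as well as run time: N single-threaded WUs each carry their own copy of the working set, while one WU with N threads shares a single copy. A rough C/OpenMP illustration (not CAMB code; the array is just a stand-in for a WU's working set, about 400 MB here):

    /* One shared allocation split across threads: peak RAM stays ~1x,
     * where four independent single-threaded tasks would need ~4x. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define GRID 50000000L  /* ~400 MB of doubles; hypothetical size */

    int main(void)
    {
        double *work = malloc(GRID * sizeof *work);
        if (work == NULL) return 1;

        double total = 0.0;
        /* all threads operate on the same array */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < GRID; i++) {
            work[i] = (double)i * 1e-9;  /* placeholder computation */
            total += work[i];
        }
        printf("checksum %.3f using %d threads\n",
               total, omp_get_max_threads());
        free(work);
        return 0;
    }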
sygopet
Joined: 2 Aug 08
Posts: 27
Credit: 204,771
RAC: 0
Message 8408 - Posted: 4 Jun 2009, 10:15:55 UTC - in response to Message 8406.  

I am starting work on the CAMB application today to reduce its memory and processor usage as much as possible.


Thanks for that update - it would be good to get my old machine back into action on Milky Way.

It occurs to me that if your future project needs lots more memory, it could still be offered as an optional "Double Creamy" application, in the same way that Climate Prediction, SETI and possibly other projects have (or had) optional apps, letting the client decide whether to risk the potential thrombosis.

Or stick with the safer "semi-skimmed" (half and half) for those of us with older machines under doctor's orders.
sygopet
Joined: 2 Aug 08
Posts: 27
Credit: 204,771
RAC: 0
Message 8409 - Posted: 4 Jun 2009, 15:57:49 UTC - in response to Message 8408.  

Oops, sorry everyone - I'm not concentrating!
Any reference in message 8408 to any project other than Cosmology is unintended and irrelevant.
I'm going back to catch up on my sleep.
Misfit
Volunteer tester
Joined: 9 Jun 07
Posts: 150
Credit: 237,789
RAC: 0
Message 8410 - Posted: 4 Jun 2009, 16:33:26 UTC - in response to Message 8409.  

Oops, sorry everyone - I'm not concentrating!
Any reference in message 8408 to any project other than Cosmology is unintended and irrelevant.
I'm going back to catch up on my sleep.

Don't worry about it. In space no one can hear you scream.
me@rescam.org
Montaray Jack
Joined: 13 May 09
Posts: 2
Credit: 1,131,060
RAC: 0
Message 8424 - Posted: 10 Jun 2009, 5:52:15 UTC
Last modified: 10 Jun 2009, 6:08:00 UTC

Strangely enough, someone on another forum (AMDZone) just asked me about the system requirements for this project. A link to the minimum system requirements, or a note on the download page, would be helpful.

I've also seen the "waiting for memory" message on my machine (Debian Linux, dual Shanghai Opterons) with 6+ GB of RAM (8 GB installed, 6+ GB usable because of a memory hole that depends on BIOS settings).
Might the problem be the 32/64-bit split at 2 GB (or at 3 GB or 4 GB, again depending on the OS and settings)? I haven't seen the warning when the machine saw all 8 GB, though. A quick probe of the point is sketched below.
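
A hypothetical C probe of that hypothesis (not project code): a 32-bit process cannot address more than about 2-3 GB of user space no matter how much RAM is installed, so a 32-bit CAMB or BOINC client would hit memory pressure even on an 8 GB box.

    /* Reports pointer width and whether one 3 GB chunk can be allocated;
     * typically fails in a 32-bit process and succeeds in a 64-bit one. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        printf("pointer size: %zu bits\n", 8 * sizeof(void *));
        void *p = malloc(3ULL * 1024 * 1024 * 1024);
        printf("3 GB allocation: %s\n", p ? "ok" : "failed");
        free(p);
        return 0;
    }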

OT: If you do a fix and respin, which compiler will you use? gcc and gfortran 4.3 and 4.4 have improved performance considerably across the board; gfortran is getting close to icc in terms of efficient code (better on certain code). Open64 gets much improved performance on AMD processors, and I think it makes pretty good Itanium code (Intel wrote the Itanium backend; HP maintains it). icc takes the slowest path on AMD processors, but makes the best code for Intel chips.

edit:
@Anshul Kanakia: I haven't noticed any memory leaks in the Linux port; if they exist, they are very small. It doesn't even page to disk.
LochDhu
Joined: 15 Jul 09
Posts: 1
Credit: 4,513,491
RAC: 0
Message 8496 - Posted: 23 Jul 2009, 18:16:11 UTC

It sounds like you are making use of the memory, so I don't think this is really a problem. It has always been a caveat of distributed computing that not all computers can run all projects. However, I think you should warn potential participants about the memory needs of this project on the "How to Join" page.

On my dual-core laptop with 1.5 GB, it caused most of my other applications to page out to disk. It's a slow laptop hard drive, so it was very noticeable. I've suspended the project for now, but I'll let it finish tonight when only BOINC runs, then detach. I'll happily give C@H a core of my desktop, since it has 4 GB. It just would have been better to have known about this before signing up.
Scrooby
Joined: 13 Feb 08
Posts: 77
Credit: 616,140
RAC: 0
Message 8507 - Posted: 5 Aug 2009, 18:54:04 UTC - in response to Message 8390.  
Last modified: 5 Aug 2009, 18:55:17 UTC

I have been thinking about a second application for C@H that would be very memory intensive.

All the best,

Ben



Wouldn't it be a good idea to finally get BOINC working properly on this server before creating different jobs that will get stuck and break in newer and even more horrible ways than the ones we see now?
Professor Ray
Joined: 16 Jul 09
Posts: 7
Credit: 72,901
RAC: 40
Message 8554 - Posted: 24 Aug 2009, 9:04:50 UTC

I've detached from the project. Every single one of the WUs I've processed since I joined has resulted in 0 credit due to "compute errors".