Forums : Cosmology and Astronomy : Analysis on work done
Profile Jayargh
Volunteer moderator
Volunteer tester
Joined: 25 Jun 07
Posts: 508
Credit: 2,282,158
RAC: 0
Message 3309 - Posted: 18 Oct 2007, 1:06:32 UTC
Last modified: 18 Oct 2007, 1:07:24 UTC

Hi Ben and Science group,

Looking at what C@H has produced since Scott said we were doing usable science I figured a few hundred thousand workunits have been completed.

Has any analysis been done so far?

Can you give us any insights on any progress the project has made? Learned anything new?

Any kind of update on this would be appreciated.

Thanks JRenkar
ID: 3309
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 3314 - Posted: 18 Oct 2007, 10:32:07 UTC
Last modified: 18 Oct 2007, 10:40:40 UTC

Hi JRenkar -

Funny you should ask... Chad and I are nearly finished writing up what will be the first paper to use Cosmology@Home results.

The paper reports on an idea, and a code Chad wrote to implement it, for compressing the information from the results of the C@H runs and the additional supercomputing runs we have done. Our code can then reproduce all of these calculations very quickly (milliseconds as opposed to hours per WU) and with high accuracy.

In addition, our code interpolates between the C@H runs just as quickly. So if we want to know the properties of the cosmic microwave background in a model that lies somewhere between the models that were run on C@H, our code gives very accurate results for that too; we have checked.
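The precompute-then-interpolate idea can be sketched in miniature. Everything below is a hypothetical stand-in, not the actual code from the paper: a cheap toy formula plays the role of a full CAMB run, and plain linear interpolation over a one-parameter grid plays the role of the real compression scheme.

```python
import math
from bisect import bisect_right

def expensive_model(param, ell):
    """Stand-in for a full Boltzmann-code run (a made-up formula here;
    the real calculation takes hours per work unit)."""
    return math.exp(-param * ell / 1000.0) * (1.0 + 0.1 * math.sin(ell / 50.0))

def build_table(param_grid, ells):
    """Precompute the results once, the role played by the C@H runs."""
    return [[expensive_model(p, ell) for ell in ells] for p in param_grid]

def interp_model(param, param_grid, table, ells):
    """Answer a query for an in-between model by linear interpolation,
    in microseconds instead of hours."""
    j = min(max(bisect_right(param_grid, param) - 1, 0), len(param_grid) - 2)
    t = (param - param_grid[j]) / (param_grid[j + 1] - param_grid[j])
    return [(1 - t) * table[j][i] + t * table[j + 1][i]
            for i in range(len(ells))]

ells = [float(l) for l in range(2, 200)]
grid = [0.5 + 0.1 * k for k in range(16)]   # coarse grid of "models run on C@H"
table = build_table(grid, ells)

# Query a model between two grid points and compare to the exact answer:
approx = interp_model(1.23, grid, table, ells)
exact = [expensive_model(1.23, ell) for ell in ells]
err = max(abs(a - e) for a, e in zip(approx, exact))
```

Even this crude version recovers the in-between model to high accuracy; the point is that the table lookup is essentially free, while the "real" calculation would cost hours per work unit.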

So by releasing this code we are in effect making the entire set of results of C@H so far available to everyone in cosmology! Our colleagues just need to download our code and a few MB of data.

We expect that our code will be used very broadly in the cosmology community. In particular our collaborators within the Planck project are already planning to use our code on simulations of Planck data.

I'll keep you posted on our progress!

One thing we still need to decide is how to best acknowledge and credit the contributions by everyone on C@H. I was thinking that we could thank all the contributors as a group and put a URL into the paper that links to a snapshot of the Top C@H Contributors page on the day we submit.

Please don't comment on the acknowledgment idea here since this thread is about science, not politics :). Any comments on these issues should go into another forum. I am sure our Forum moderators are going to continue doing a great job organizing the discussion here.

A final comment: this does not mean that C@H is done! So far we have run in a fairly restricted model space and there are several directions in theory space that we can begin exploring. Also, we are currently working on a new application for C@H (with a long run time of several days) which will remove the final known source of theoretical uncertainty in predicting the properties of the cosmic microwave background. This will have to be done before analyzing the Planck data because current codes are not accurate enough. Lots of things can come out of that apart from higher precision all around - for example reliable information from the CMB on the universal Helium abundance.

Please let me know your thoughts on our work, as always!

All the best,

Ben

Creator of Cosmology@Home
ID: 3314
Profile Martin Beltov

Joined: 3 Sep 07
Posts: 24
Credit: 270,854
RAC: 0
Message 3319 - Posted: 18 Oct 2007, 20:56:11 UTC

Thank you so much for this information, Professor Wandelt.
ID: 3319
rbpeake

Joined: 27 Jun 07
Posts: 118
Credit: 61,883
RAC: 0
Message 3346 - Posted: 22 Oct 2007, 16:14:01 UTC - in response to Message 3314.  

...Funny you should ask... Chad and I are actually nearly finished writing up a paper that will be the first paper to use Cosmology@Home results....

All the best,

Ben

I was not aware that CAH at this stage of its Alpha-Beta existence was doing scientifically interesting work! I thought the current purpose of our crunching was just to prepare CAH for general release, working out the bugs so to speak. So this is a very pleasant surprise!

At some point an announcement and/or link to this paper imho should be placed under the News section on the main project page. This is indeed big news, and serves as an additional motivator to those of us who love doing meaningful science and were not aware we are already at that stage! :)

ID: 3346
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 3469 - Posted: 24 Oct 2007, 20:32:23 UTC - in response to Message 3346.  
Last modified: 24 Oct 2007, 20:32:52 UTC


I was not aware that CAH at this stage of its Alpha-Beta existence was doing scientifically interesting work! I thought the current purpose of our crunching was just to prepare CAH for general release, working out the bugs so to speak. So this is a very pleasant surprise!

At some point an announcement and/or link to this paper imho should be placed under the News section on the main project page. This is indeed big news, and serves as an additional motivator to those of us who love doing meaningful science and were not aware we are already at that stage! :)


Well, we are not in the business of wasting your precious CPU cycles! So we made sure that our tests were done with scientifically meaningful work units.

We haven't announced it officially yet because we are not completely done with the paper, but it should be submitted to a journal very shortly. When we reach that stage, we will of course let everyone know.

All the best,

Ben


Creator of Cosmology@Home
ID: 3469
Profile MSE29

Joined: 3 Jul 07
Posts: 30
Credit: 2,616,948
RAC: 0
Message 3523 - Posted: 26 Oct 2007, 15:56:46 UTC - in response to Message 3469.  

Many thanks for your efforts to let us know how it stands!

Thanks a lot!
ID: 3523
Rapture
Joined: 27 Oct 07
Posts: 85
Credit: 661,330
RAC: 0
Message 4548 - Posted: 21 Jan 2008, 22:21:50 UTC - in response to Message 3314.  


A final comment: this does not mean that C@H is done! So far we have run in a fairly restricted model space and there are several directions in theory space that we can begin exploring. Also, we are currently working on a new application for C@H (with a long run time of several days) which will remove the final known source of theoretical uncertainty in predicting the properties of the cosmic microwave background. This will have to be done before analyzing the Planck data because current codes are not accurate enough. Lots of things can come out of that apart from higher precision all around - for example reliable information from the CMB on the universal Helium abundance.


Ben, is it possible to eliminate all uncertainties regarding any unknown parameter of the CMB? It looks like the upcoming new application has the potential of making significant discoveries about the universe!

Bill
ID: 4548
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 4672 - Posted: 24 Jan 2008, 16:56:07 UTC - in response to Message 4548.  


A final comment: this does not mean that C@H is done! So far we have run in a fairly restricted model space and there are several directions in theory space that we can begin exploring. Also, we are currently working on a new application for C@H (with a long run time of several days) which will remove the final known source of theoretical uncertainty in predicting the properties of the cosmic microwave background. This will have to be done before analyzing the Planck data because current codes are not accurate enough. Lots of things can come out of that apart from higher precision all around - for example reliable information from the CMB on the universal Helium abundance.


Ben, is it possible to eliminate all uncertainties regarding any unknown parameter of the CMB? It looks like the upcoming new application has the potential of making significant discoveries about the universe!

Bill


One can always improve the accuracy of the theory - but beyond some point further improvement does not make sense, since the amount of data is always limited.

Here is an analogy. Let's say it's an election year (hmm....), there are several candidates, and candidates A and B are the most popular ones. You have come up with a theory that in the election candidate B will get half of the votes of candidate A, plus/minus 10%. You have limited resources, so you can only poll 12 people. Without getting into the details, suffice it to say that such a small sample will give you a statistical error much larger than 10%.

You find you have some extra resources (time, money) on your hands. Do you improve the theory (reduce the 10%), or do you improve your data collection (poll more people)?

It's pretty clear that you ought to try to poll more people.

Why? The theory predicts something at a certain level of accuracy (10%), but this level of accuracy is more than enough given the limited amount of data you have access to. You would gain more by investing any additional resources into increasing your data set (polling more people).

Let's say instead that your data collection operation is flush with resources and you are polling 2000 people. At this point the statistical error has decreased to the point where the theoretical error of 10% is much higher than your statistical error. Now it makes sense to invest additional resources on the theoretical side.
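The numbers in the analogy are easy to verify: for a near 50/50 race, the statistical error of a poll of n people is about sqrt(p(1-p)/n). A quick sketch (pure illustration, nothing project-specific):

```python
import math

def poll_standard_error(p, n):
    """Standard error of an estimated vote share p from a sample of n
    respondents (simple binomial sampling; it is largest near p = 0.5)."""
    return math.sqrt(p * (1.0 - p) / n)

theory_error = 0.10                        # the theory's plus/minus 10%

se_12 = poll_standard_error(0.5, 12)       # about 0.14: swamps the theory error
se_2000 = poll_standard_error(0.5, 2000)   # about 0.011: theory is now the bottleneck
```

With 12 respondents the sampling error is roughly 14%, larger than the 10% theory error, so better data wins; with 2000 it drops to about 1%, and the theory becomes the limiting factor.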

Of course you can still test your fuzzy theory with the good data set, but the point is that you now have enough data to test a more accurate theory. Constructing a more accurate theory will likely teach you something, because you will have to refine your assumptions, discover additional relationships in the data etc. For different assumptions you will get different theoretical predictions and the great thing is that you can now test them because you have data of the necessary quality.

It's similar in any scientific subject, including cosmology. As the observations get better, it makes sense to work on the theoretical end to make the comparison between data and theory sharper. In the process we learn something.

For the current data, CAMB is doing a great job. With the Planck satellite launching this year, there are some areas of the CAMB code that need sprucing up so that we are ready for the Planck data when it comes.

All the best,

Ben

Creator of Cosmology@Home
ID: 4672
Profile Jayargh
Volunteer moderator
Volunteer tester
Joined: 25 Jun 07
Posts: 508
Credit: 2,282,158
RAC: 0
Message 4676 - Posted: 24 Jan 2008, 17:13:56 UTC
Last modified: 24 Jan 2008, 17:14:41 UTC

snip..

Also, we are currently working on a new application for C@H (with a long run time of several days)


Optimization of that application may be imperative to reduce runtimes and keep slower computers in the effort.

(Err posting this here in analysis done thread because I just really noticed that tidbit here.)
ID: 4676
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 4693 - Posted: 24 Jan 2008, 20:38:51 UTC - in response to Message 4676.  

snip..

Also, we are currently working on a new application for C@H (with a long run time of several days)


Optimization of that application may be imperative to reduce runtimes and keep slower computers in the effort.

(Err posting this here in analysis done thread because I just really noticed that tidbit here.)


The run-time is not because of poor optimization. Some problems just need a lot of CPU. And long run-times are perfect for distributed computing, because the overhead of distributing and receiving work packages is a lot less compared to short work-packages.
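To make the overhead argument concrete, here is a toy calculation with assumed numbers (the one-minute transfer cost is an invented figure for illustration, not a measured C@H value):

```python
def overhead_fraction(transfer_seconds, compute_seconds):
    """Fraction of a work unit's total wall time spent distributing and
    collecting it rather than computing."""
    return transfer_seconds / (transfer_seconds + compute_seconds)

transfer = 60.0                                       # assume ~1 minute per WU

short_wu = overhead_fraction(transfer, 30 * 60.0)      # 30-minute WU
long_wu = overhead_fraction(transfer, 3 * 24 * 3600.0) # 3-day WU
```

With these numbers a 30-minute work unit spends roughly 3% of its wall time on distribution overhead, while a 3-day work unit spends only about 0.02%, which is why long run-times suit distributed computing so well.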

All the best,
Ben
Creator of Cosmology@Home
ID: 4693
Profile Jayargh
Volunteer moderator
Volunteer tester
Joined: 25 Jun 07
Posts: 508
Credit: 2,282,158
RAC: 0
Message 4696 - Posted: 24 Jan 2008, 20:56:18 UTC - in response to Message 4693.  

snip..

Also, we are currently working on a new application for C@H (with a long run time of several days)


Optimization of that application may be imperative to reduce runtimes and keep slower computers in the effort.

(Err posting this here in analysis done thread because I just really noticed that tidbit here.)


The run-time is not because of poor optimization. Some problems just need a lot of CPU. And long run-times are perfect for distributed computing, because the overhead of distributing and receiving work packages is a lot less compared to short work-packages.

All the best,
Ben


Ben, I whole-heartedly agree with your analysis... however, my point was that if it takes a few days on C2D tech, it might take a week or more on older technology, and some people may not wish to run work that takes that long. Perhaps at that point you will have different applications, so the slower machines can still run modelling instead of data analysis. Many projects offer the option of running short vs. long work through different applications within a project.

Optimizations can also come from having applications recognize that a machine is capable of running SSE2, SSE3, or 3DNow!, and 64-bit vs. 32-bit, to help speed up crunch times.
Regards, Jeff
ID: 4696
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 4719 - Posted: 24 Jan 2008, 23:12:41 UTC - in response to Message 4696.  


Ben, I whole-heartedly agree with your analysis... however, my point was that if it takes a few days on C2D tech, it might take a week or more on older technology, and some people may not wish to run work that takes that long. Perhaps at that point you will have different applications, so the slower machines can still run modelling instead of data analysis. Many projects offer the option of running short vs. long work through different applications within a project.

Optimizations can also come from having applications recognize that a machine is capable of running SSE2, SSE3, or 3DNow!, and 64-bit vs. 32-bit, to help speed up crunch times.
Regards, Jeff


I agree - I should have said that this will be a separate executable, so people will be able to control whether they want to contribute to the long runs or not. The code will also be less battle-tested than the CAMB code, so there will be some experimentation involved. This is what research is like!

Ben
Creator of Cosmology@Home
ID: 4719
Rapture
Joined: 27 Oct 07
Posts: 85
Credit: 661,330
RAC: 0
Message 4839 - Posted: 26 Jan 2008, 14:33:55 UTC - in response to Message 4672.  
Last modified: 26 Jan 2008, 14:39:09 UTC

How much will Cosmology@Home crunch when the Planck satellite data arrives? It looks like this will keep us busy for a long time to come.

ID: 4839
Profile Scott
Volunteer moderator
Project administrator
Project developer
Joined: 1 Apr 07
Posts: 662
Credit: 13,742
RAC: 0
Message 4850 - Posted: 27 Jan 2008, 5:04:32 UTC - in response to Message 4839.  

How much will Cosmology@Home crunch when the Planck satellite data arrives? It looks like this will keep us busy for a long time to come.


Oh man, I get scared just thinking about it. =)

I suppose it depends on how many people are actively crunching with us at the time. But, yeah, we'll be working with the Planck data for years.
Scott Kruger
Project Administrator, Cosmology@Home
ID: 4850
marj

Joined: 1 Oct 07
Posts: 7
Credit: 6,550
RAC: 0
Message 4855 - Posted: 27 Jan 2008, 16:10:23 UTC - in response to Message 4696.  

...however, my point was that if it takes a few days on C2D tech, it might take a week or more on older technology, and some people may not wish to run work that takes that long. Perhaps at that point you will have different applications, so the slower machines can still run modelling instead of data analysis. Many projects offer the option of running short vs. long work through different applications within a project.


Oh my! For those of us used to crunching climateprediction models, a mere week or more sounds like a breeze! ;-)
ID: 4855
Profile JohnMD

Joined: 31 Jan 08
Posts: 4
Credit: 100,027
RAC: 0
Message 4908 - Posted: 3 Feb 2008, 19:39:50 UTC

"Oh my! For those of us used to crunching climateprediction models, a mere week or more sounds like a breeze! ;-)"

Well said, Marj!
However, long WUs require pretty watertight checkpoint-restart facilities built into the project. My faith in Windows isn't enough to justify embarking on a CPDN model if it requires around 4 months' CPU without a single OS crash.
ID: 4908
marj

Joined: 1 Oct 07
Posts: 7
Credit: 6,550
RAC: 0
Message 4917 - Posted: 4 Feb 2008, 11:47:28 UTC - in response to Message 4908.  

My faith in Windows isn't enough to justify embarking on a CPDN model if it requires around 4 months' CPU without a single OS crash.


Well, you could always back up and restore. It's a part of normal life over on CPDN. :-)
ID: 4917
Profile Ananas

Joined: 19 Jan 08
Posts: 180
Credit: 2,500,290
RAC: 0
Message 4969 - Posted: 12 Feb 2008, 2:17:27 UTC - in response to Message 4917.  

I have one HadCM3 at 94% on a 2xP3/1266 and hope to finish it just before the deadline (which is set to 1 year!).
ID: 4969
rbpeake

Joined: 27 Jun 07
Posts: 118
Credit: 61,883
RAC: 0
Message 7095 - Posted: 20 Aug 2008, 21:27:41 UTC

Just curious: what has been done recently with the results we have been producing? Any papers in the works, or are we essentially "fine-tuning" the CAMB application until the Planck data starts coming in? (And btw, when is it anticipated that Planck data will be available, assuming all goes according to plan?)

Thanks! Always interested in what happens with the results of our crunching!
ID: 7095
Profile Benjamin Wandelt
Volunteer moderator
Project administrator
Project scientist
Joined: 24 Jun 07
Posts: 192
Credit: 15,273
RAC: 0
Message 7096 - Posted: 21 Aug 2008, 4:22:25 UTC - in response to Message 7095.  
Last modified: 21 Aug 2008, 4:23:02 UTC

Just curious about what is being done recently with the results that we have been producing? Any papers in the works, or are we essentially "fine-tuning" the CAMB application until the Planck data starts coming in? (And btw, when is it anticipated that Planck data will be available, assuming all goes according to plan?)

Thanks! Always interested in what happens with the results of our crunching!


Hi -

The most recent code version is a major update to CAMB, including a broader range of dark energy models.

Here's the thing: all dark energy missions, both from the ground and from space, assume that Planck flies, Planck sends down data according to plan, and the Planck data gets analyzed. When the time comes we will want to analyze the data from Planck and from those missions together. So we are currently increasing the number of parameters describing the properties of the dark energy, to find out in detail how much we will be able to learn from these planned dark energy related observations.

Once the results from the current CAMB version are in, we are ready to update these predictions. That will also be a good time to update the overall predictions for what Planck will be able to do, because Planck has just completed its final system tests at the Centre Spatial de Liège (see the other thread, where I posted a link to some pictures of Planck being moved into the space-simulating test chamber).

Planck is projected to fly in February 2009. It will take Planck 3 months to get to L2 (the second Lagrange point of the Sun-Earth system) and to start observations from there. It'll take 6 months to make a full sky map. Planck will make at least 2 full sky maps, so that's 1 year of data. We will continue to analyze these data while Planck operates. How long Planck will be allowed to operate depends on how long the liquid helium lasts and how long ESA keeps funding Planck ground operations - that could be much longer than 1 year.

The first data release will happen two years after the first year of data.

So the results of the analysis will be released 39 months after launch. If the launch is in February of 2009, that would be May of 2012 - just in time for the Dark Energy Survey and other dark energy missions to make use of the data.
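The schedule arithmetic checks out: 3 months to reach L2, 12 months of observations, then 24 more months to the first data release gives 39 months. A minimal date helper (the dates and intervals come from Ben's post; the function is just calendar arithmetic):

```python
def add_months(year, month, n):
    """Advance a (year, month) pair by n months."""
    total = year * 12 + (month - 1) + n
    return total // 12, total % 12 + 1

# 3 (transit to L2) + 12 (first year of data) + 24 (to first release) = 39
months_to_release = 3 + 12 + 24

launch = (2009, 2)                                    # February 2009
release = add_months(*launch, months_to_release)      # (2012, 5): May 2012
```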

Data obtained after the first 12 months of observations will be released later, on a schedule to be determined.

Let me know if this answers most of your questions.

All the best,
Ben
Creator of Cosmology@Home
ID: 7096