Forums : Cosmology and Astronomy : Analysis on work done
Author | Message |
---|---|
Volunteer moderator Volunteer tester Joined: 25 Jun 07 Posts: 508 Credit: 2,282,158 RAC: 0 |
Hi Ben and Science group, Looking at what C@H has produced since Scott said we were doing usable science, I figure a few hundred thousand workunits have been completed. Has any analysis been done so far? Can you give us any insight into the progress the project has made? Have you learned anything new? Any kind of update would be appreciated. Thanks, JRenkar |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
Hi JRenkar - Funny you should ask... Chad and I are nearly finished writing up what will be the first paper to use Cosmology@Home results. The paper reports on an idea, and a code Chad wrote to implement it, that compresses the information from the results of the C@H runs and from additional supercomputing runs we have done. Our code can then reproduce all of these calculations very quickly (milliseconds, as opposed to hours per WU) and with high accuracy. In addition, our code just as quickly interpolates between the C@H runs: if we want to know the properties of the cosmic microwave background in a model that lies somewhere between models that were run on C@H, we have checked that our code gives very accurate results for those too.

So by releasing this code we are in effect making the entire set of C@H results so far available to everyone in cosmology! Our colleagues just need to download our code and a few MB of data. We expect that our code will be used very broadly in the cosmology community. In particular, our collaborators within the Planck project are already planning to use it on simulations of Planck data. I'll keep you posted on our progress!

One thing we still need to decide is how best to acknowledge and credit the contributions of everyone on C@H. I was thinking that we could thank all the contributors as a group and put a URL in the paper that links to a snapshot of the Top C@H Contributors page on the day we submit. Please don't comment on the acknowledgment idea here, since this thread is about science, not politics :). Any comments on those issues should go into another forum - I am sure our forum moderators will continue doing a great job organizing the discussion here.

A final comment: this does not mean that C@H is done! So far we have run in a fairly restricted model space, and there are several directions in theory space that we can begin exploring.

Also, we are currently working on a new application for C@H (with a long run time of several days) which will remove the final known source of theoretical uncertainty in predicting the properties of the cosmic microwave background. This has to be done before analyzing the Planck data, because current codes are not accurate enough. A lot can come out of that beyond higher precision all around - for example, reliable information from the CMB on the universal helium abundance.

Please let me know your thoughts on our work, as always! All the best, Ben Creator of Cosmology@Home |
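[Editor's note: the actual compression code Ben describes is not shown in this thread. The general idea - precompute expensive runs, compress them into a small set of coefficients, then evaluate and interpolate in milliseconds - can be sketched roughly as below. The toy model, parameter ranges, and all function names here are invented for illustration; a real emulator for CMB spectra would be far more sophisticated.]

```python
import numpy as np

# Stand-in for an expensive calculation (a real CAMB run takes hours per WU).
# This toy "model" maps one parameter to one observable.
def expensive_model(param):
    return np.sin(2.0 * param) + 0.5 * param**2

# "Training" runs: the precomputed grid of results (like completed workunits).
train_params = np.linspace(0.0, 2.0, 25)
train_results = expensive_model(train_params)

# Compress the whole grid into a handful of polynomial coefficients.
coeffs = np.polyfit(train_params, train_results, deg=8)

def fast_emulator(param):
    """Millisecond-scale evaluation from the compressed coefficients."""
    return np.polyval(coeffs, param)

# Interpolate at parameter values that were never run directly,
# and check the emulator against the expensive calculation.
test_params = np.linspace(0.05, 1.95, 50)
max_err = np.max(np.abs(fast_emulator(test_params) - expensive_model(test_params)))
print(f"max interpolation error: {max_err:.2e}")
```

The payoff is the same as Ben describes: anyone holding only the small coefficient file can reproduce, and interpolate between, all the precomputed runs without redoing them.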
Joined: 3 Sep 07 Posts: 24 Credit: 270,854 RAC: 0 |
Thank you so much for this information, Professor Wandelt |
rbpeake Joined: 27 Jun 07 Posts: 118 Credit: 61,883 RAC: 0 |
...Funny you should ask... Chad and I are actually nearly finished writing up a paper that will be the first paper to use Cosmology@Home results.... I was not aware that CAH at this stage of its Alpha-Beta existence was doing scientifically interesting work! I thought the current purpose of our crunching was just to prepare CAH for general release, working out the bugs so to speak. So this is a very pleasant surprise! At some point, imho, an announcement and/or a link to this paper should be placed under the News section on the main project page. This is indeed big news, and it serves as an additional motivator to those of us who love doing meaningful science and were not aware we are already at that stage! :) |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
Well, we are not in the business of wasting your precious CPU cycles! So we made sure that our tests were done with scientifically meaningful work units. We haven't announced it officially yet because we are not completely done with the paper, but it should be submitted to a journal very shortly. When we are at that stage we will of course let everyone know. All the best, Ben Creator of Cosmology@Home |
Joined: 3 Jul 07 Posts: 30 Credit: 2,616,948 RAC: 0 |
Many thanks for your efforts to let us know how things stand! Thanks a lot! |
Rapture Joined: 27 Oct 07 Posts: 85 Credit: 661,330 RAC: 0 |
Ben, is it possible to eliminate all uncertainties regarding any unknown parameter of the CMB? It looks like the upcoming new application has the potential of making significant discoveries about the universe! Bill |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
One can always improve the accuracy of the theory - but beyond some point it does not make sense, since the amount of data is always limited. Here is an analogy. Let's say it's an election year (hmm....), there are several candidates, and candidates A and B are the most popular ones. You have come up with a theory that in the election candidate B will get half the votes of candidate A, plus/minus 10%. You have limited resources, so you can only poll 12 people. Without getting into the details, suffice it to say that such a small sample will give you a statistical error much larger than 10%.

Now you find you have some extra resources (time, money) on your hands. Do you improve the theory (reduce the 10%), or do you improve your data collection (poll more people)? It's pretty clear that you ought to poll more people. Why? The theory predicts something at a certain level of accuracy (10%), but that level of accuracy is more than enough given the limited amount of data you have access to. You would gain more by investing any additional resources in increasing your data set.

Let's say instead that your data collection operation is flush with resources and you are polling 2000 people. At this point the statistical error has decreased to the point where the theoretical error of 10% is much larger than your statistical error. Now it makes sense to invest additional resources on the theoretical side. Of course you can still test your fuzzy theory with the good data set, but the point is that you now have enough data to test a more accurate theory. Constructing one will likely teach you something, because you will have to refine your assumptions, discover additional relationships in the data, etc. Different assumptions yield different theoretical predictions, and the great thing is that you can now test them because you have data of the necessary quality.

It's similar in any scientific subject, including cosmology. As the observations get better, it makes sense to work on the theoretical end to make the comparison between data and theory sharper. In the process we learn something. For the current data, CAMB is doing a great job. With the Planck satellite launching this year, there are some areas of the CAMB code that need sprucing up so that we are ready for the Planck data when it comes. All the best, Ben Creator of Cosmology@Home |
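[Editor's note: the crossover in Ben's polling analogy can be checked with the standard back-of-envelope formula for a poll's margin of error, roughly 1.96 * sqrt(p(1-p)/n) at 95% confidence. The numbers below simply plug in his two sample sizes.]

```python
import math

# Rough statistical margin of error for a polled fraction p with n respondents,
# at ~95% confidence.
def margin_of_error(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

theory_error = 0.10  # the theory's stated accuracy: plus/minus 10%

for n in (12, 2000):
    stat = margin_of_error(0.5, n)  # worst case, p = 0.5
    better_buy = "more data" if stat > theory_error else "better theory"
    print(f"n={n:5d}: statistical error ~{stat:.1%} -> invest in {better_buy}")
```

With 12 respondents the statistical error (~28%) swamps the 10% theory error, so more data wins; with 2000 respondents it drops to ~2%, and the theory becomes the limiting factor - exactly the crossover Ben describes.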
Volunteer moderator Volunteer tester Joined: 25 Jun 07 Posts: 508 Credit: 2,282,158 RAC: 0 |
snip.. Also, we are currently working on a new application for C@H (with a long run time of several days) Optimization of that application may be imperative to reduce runtimes and keep slower computers in the effort. (Err, posting this here in the analysis thread because I only just noticed that tidbit here.) |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
snip.. The run-time is not due to poor optimization. Some problems just need a lot of CPU. And long run-times are actually well suited to distributed computing, because the overhead of distributing and receiving work packages is much smaller, relative to the computation, than it is for short work packages. All the best, Ben Creator of Cosmology@Home |
Volunteer moderator Volunteer tester Joined: 25 Jun 07 Posts: 508 Credit: 2,282,158 RAC: 0 |
snip.. Ben, I whole-heartedly agree with your analysis... however... my point was that if it takes a few days on C2D tech, it might take a week or more on older technology, and some people may not wish to run work that takes that long. Perhaps at that point you will have different applications, so the slower machines can still run modelling instead of data analysis. Many projects offer the option of running short vs. long work via different applications within a project. Optimizations can also come from having applications recognize that a machine is capable of SSE2, SSE3 or 3DNow!, and 64-bit vs. 32-bit, to help speed up crunch times. Regards - Jeff |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
I agree - I should have said that this will be a separate executable, so people will be able to control whether or not they want to contribute to the long runs. The code will also be less battle-tested than the CAMB code, so there will be some experimentation involved. This is what research is like! Ben Creator of Cosmology@Home |
Rapture Joined: 27 Oct 07 Posts: 85 Credit: 661,330 RAC: 0 |
How much will Cosmology@Home crunch when the Planck satellite data arrives? It looks like this will keep us busy for a long time to come. |
Volunteer moderator Project administrator Project developer Joined: 1 Apr 07 Posts: 662 Credit: 13,742 RAC: 0 |
How much will Cosmology@Home crunch when the Planck satellite data arrives? It looks like this will keep us busy for a long time to come. Oh man, I get scared just thinking about it. =) I suppose it depends on how many people are actively crunching with us at the time. But, yeah, we'll be working with the Planck data for years. Scott Kruger Project Administrator, Cosmology@Home |
marj Joined: 1 Oct 07 Posts: 7 Credit: 6,550 RAC: 0 |
...however...my point was that if it takes a few days on C2D tech then it might take a week or more for older technology and some people may not wish to run work that takes that long....perhaps you will have different applications at that point to still run modelling for the slower machines instead of data analysis. Many projects offer the option of running short vs long work by different applications of various projects within a project. Oh my! For those of us used to crunching climateprediction models, a mere week or more sounds like a breeze! ;-) |
Joined: 31 Jan 08 Posts: 4 Credit: 100,027 RAC: 0 |
\"Oh my! For those of us used to crunching climateprediction models, a mere week or more sounds like a breeze! ;-) \" Well said Marj ! However, long WU\'s require pretty watertight checkpoint-restart facilities built in to the project. My faith in Windows isn\'t enough to justify embarking on a CPDN-model if it requires around 4 months\' cpu without a single OS crash. |
marj Joined: 1 Oct 07 Posts: 7 Credit: 6,550 RAC: 0 |
My faith in Windows isn't enough to justify embarking on a CPDN model if it requires around 4 months' CPU without a single OS crash. Well, you could always back up and restore. It's a part of normal life over on CPDN :-) |
Joined: 19 Jan 08 Posts: 180 Credit: 2,500,290 RAC: 0 |
I have one HadCM3 at 94% on a 2xP3/1266 and hope to finish it just before the deadline (which is set to 1 year!) |
rbpeake Joined: 27 Jun 07 Posts: 118 Credit: 61,883 RAC: 0 |
Just curious about what is being done recently with the results we have been producing? Any papers in the works, or are we essentially "fine-tuning" the CAMB application until the Planck data starts coming in? (And btw, when is it anticipated that Planck data will be available, assuming all goes according to plan?) Thanks! Always interested in what happens with the results of our crunching! |
Volunteer moderator Project administrator Project scientist Joined: 24 Jun 07 Posts: 192 Credit: 15,273 RAC: 0 |
Just curious about what is being done recently with the results that we have been producing? Any papers in the works, or are we essentially "fine-tuning" the CAMB application until the Planck data starts coming in? (And btw, when is it anticipated that Planck data will be available, assuming all goes according to plan?)

Hi - The most recent code version is a major update to CAMB, including a broader range of dark energy models. Here's the thing: all dark energy missions, both from the ground and from space, assume that Planck flies, that Planck sends down data according to plan, and that the Planck data gets analyzed. When the time comes we will want to analyze the data from Planck and from those missions together. So we are currently increasing the number of parameters describing the properties of the dark energy, to find out in detail how much we will be able to learn from these planned dark energy observations. Once the results from the current CAMB version are in, we will be ready to update these predictions.

That will also be a good time to update the overall predictions for what Planck will be able to do, because Planck has just completed its final system tests at the Coordinated Science Laboratory in Liege (see the other thread, where I posted a link to some pictures of Planck being moved into the space-simulating test chamber).

Planck is projected to fly in February 2009. It will take Planck 3 months to get to L2 (the second Lagrange point of the Sun-Earth system) and to start observations from there. It'll take 6 months to make a full sky map, and Planck will make at least 2 full sky maps, so that's 1 year of data. We will continue to analyze these data while Planck operates. How long Planck will be allowed to operate depends on how long the liquid helium lasts and how long ESA keeps funding Planck ground operations - that could be much longer than 1 year.

The first data release will happen two years after the first year of data, so the results of the analysis will be released 39 months after launch. If the launch is in February of 2009, that would be May of 2012 - just in time for the Dark Energy Survey and other dark energy missions to make use of the data. Data obtained after the first 12 months of observations will be released later, on a schedule to be determined.

Let me know if this answers most of your questions. All the best, Ben Creator of Cosmology@Home |
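[Editor's note: a quick back-of-envelope check of the schedule above - 3 months transit, 12 months of data, then a 2-year proprietary period gives the 39 months Ben quotes. The `add_months` helper is a throwaway written for this check.]

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months (day clamped to the 1st)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

launch = date(2009, 2, 1)                      # projected launch: February 2009
at_l2 = add_months(launch, 3)                  # ~3 months transit to L2
one_year_data = add_months(at_l2, 12)          # two full-sky maps = 1 year of data
first_release = add_months(one_year_data, 24)  # release two years after that

print(f"first data release: {first_release:%B %Y}")  # 3 + 12 + 24 = 39 months
```

This reproduces the May 2012 date quoted in the post, so the 39-month figure is internally consistent.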