Nearly 300 million years ago, a curious creature called Orobates pabsti walked the land. Animals had just begun pulling themselves out of the water and exploring the big, dry world, and here was the plant-eating tetrapod Orobates, making its way on four legs. Paleontologists know it did so because one particularly well-preserved fossil has, well, four legs. And luckily enough, scientists also discovered fossilized footprints, or trackways, to match.

The assumption has been that Orobates—a cousin of the amniote lineage, which today includes mammals and reptiles—and other early tetrapods hadn’t yet evolved an “advanced” gait, instead dragging themselves along more like salamanders. But today, in an epically multidisciplinary paper in Nature, researchers detail how they married paleontology, biomechanics, computer simulations, live animal demonstrations, and even an Orobates robot to determine that the ancient critter probably walked in a far more advanced way than was previously believed possible. And that has big implications for the understanding of how locomotion evolved on land, not to mention how scientists study the ways extinct animals of all types got around.

Taken alone, a fossil skeleton or fossil trackways aren’t enough to divine how an animal moved. “The footprints only show you what their feet are doing,” says biomechanist John Hutchinson at the Royal Veterinary College, coauthor on the new paper, “because there's so many degrees of freedom, or different ways a joint can move.” Humans, after all, share an anatomy but can manage lots of silly ways to walk with the same equipment.

Without the footprints, the researchers wouldn’t be able to tell with much confidence how the fossil skeleton moved. And without the skeleton, they wouldn’t be able to fully parse the footprints. But with both, they could calculate hundreds of possible gaits for Orobates, from the less advanced belly-dragging of a skink to the more advanced, higher posture of a crocodilian running on land.

They then used a computer simulation to toy with the parameters, such as how much the spine bends back and forth as the animal moves. “The simulation basically told us the forces on the animal, and gave us some estimates of how the mechanics of the animal may have worked overall,” says Hutchinson.

You can actually play with the parameters yourself with this fantastic interactive the team put together. Seriously, click on it and play along with me.

The dots in the three-dimensional graphs are possible gaits. Blue dots get high scores, and red dots get low scores. Double-click on one and below you’ll see that particular gait at work in simulation. You’ll notice that the red dots make for gaits that look a bit … ungainly. Dark blue dots, however, look like they’re a more reasonable way for a tetrapod to move. At bottom you’ll see videos of extant species like the iguana and caiman (a small crocodilian). It was observations of these species that helped the researchers determine what biomechanical factors are important, such as how much the spine bends.

A few other parameters: The sliders on the left let you monkey with things like power expenditure. Slide it to the right and you’ll notice the good blue dots disappear.

Here’s where things get tricky, though. Power efficiency is key to survival, of course, but it’s not the only constraint in biomechanics. “Not all animals optimize for energy, especially species that only use short bursts of locomotion,” says Humboldt University of Berlin evolutionary biologist John Nyakatura, lead author on the paper. “Obviously for species that travel long distances, energy efficiency is very important. But for other species it might be less important.”

Another factor is something called bone collision (which is a great name for a metal band). When you’re putting together a fossil skeleton, you don’t know how much cartilage surrounded the joints, because that stuff rotted away long ago. And different kinds of animals have different amounts of cartilage.

So that’s a big unknown with Orobates. In the interactive, you can dial the bone collision up and down with the slider at left. “You can allow bones to collide freely or just gently touch,” says Hutchinson. “Or you can dial it up to a level of 4 and allow no collisions, which is basically saying there must be a substantial space between the joints.” Notice how that changes the dots in the graph: The more collision you prevent, the fewer the potential gaits. “Whereas if you allow plenty of collision, there's just more possibilities for the limb to move.”
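The sliders in the interactive behave like constraints applied to a pool of scored candidate gaits: tighten a constraint and dots vanish. Here's a minimal sketch of that filtering idea, with entirely invented gait data and field names; this is an illustration of the concept, not the authors' actual pipeline.

```python
# A toy sketch of constraint-based gait filtering: each candidate gait has a
# score, and sliders (power budget, required joint spacing) act as filters
# that exclude gaits, just as in the team's interactive. All values invented.

def filter_gaits(gaits, max_power, min_joint_gap):
    """Keep only gaits within a power budget and a minimum joint spacing."""
    return [g for g in gaits
            if g["power"] <= max_power and g["joint_gap"] >= min_joint_gap]

candidates = [
    {"name": "belly-drag", "power": 2.0, "joint_gap": 0.1, "score": 0.3},
    {"name": "upright",    "power": 3.5, "joint_gap": 0.6, "score": 0.9},
    {"name": "sprawled",   "power": 5.0, "joint_gap": 0.4, "score": 0.6},
]

# Requiring more space between bones (less allowed collision) prunes gaits,
# the same way dialing up "bone collision" removes dots in the interactive.
loose = filter_gaits(candidates, max_power=6.0, min_joint_gap=0.0)
strict = filter_gaits(candidates, max_power=6.0, min_joint_gap=0.5)
print([g["name"] for g in strict])  # only the upright gait survives
```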

Now, the robot. The team designed OroBOT to closely match the anatomy of Orobates. It’s of course simplified from the pure biology, but it’s still quite complicated as robots go. Each limb is made up of five actuated joints (“actuators” being the fancy robotics term for motors), while the spine has eight actuated joints that allow it to bend back and forth. In the interactive, you can play with the amount of spine bending with a slider at left, and see how dramatically that changes the gait. Also, take a look at the video of the caiman in there to see just how much its own spine bends as it moves.

The beauty of the simulation is you can run all kinds of different gaits relatively quickly. But not so with a robot. “Running too many experiments with a physical platform is quite time-expensive, and you can also damage the platform,” says coauthor and roboticist Kamilo Melo of the Swiss Federal Institute of Technology Lausanne. Running simulations helped whittle down the list.

“In the end we have several gaits we know are quite good, and those are the kinds of gaits we actually test with the real robot,” adds Melo.

What they found was that given the skeletal anatomy and matching trackways, it was likely that Orobates walked fairly upright, more like a caiman than a salamander. “Previously it was assumed that only the amniotes evolved this advanced terrestrial locomotion,” says Nyakatura. “That it is already present in Orobates demonstrates that we have to assume that locomotor diversity to be present a bit earlier.” An important confirmation from the trackways: There are no markings that would correspond to a dragging tail.

So thanks to a heady blend of disparate disciplines, the researchers could essentially resurrect a long-dead species to determine how it may have walked. “Because they have brought digital modeling and robotics and all those things together to bear on this one animal, we can be pretty confident that they've come up with a reasonable suggestion for how it moved,” says paleontologist Stuart Sumida of California State University San Bernardino. He’s got unique insight here, by the way: He helped describe Orobates in the first place 15 years ago.

It’s key to also consider where Sumida and his colleagues found the fossil, in Germany. Around 300 million years ago, there was no running water at the dig site. And it’s running water that paleontologists typically rely on to preserve specimens in mud. “This was an utterly terrestrial environment that just happened to flood occasionally,” says Sumida. “And so you get a very unusual snapshot of what life was like not in the water.”

The upright gait of Orobates, then, would make sense. “This is a thing that walked around with great facility on the land, and this is exactly what the geology suggested,” says Sumida. What that means, he adds, is that Orobates and perhaps other early land-going species adapted to their environment faster than expected.

As the Bee Gees once said: “You can tell by the way I use my walk, I’m a comfortably terrestrial early tetrapod, no time to talk.”

For the last two days, a colossal, coursing stream of super-soaked subtropical air has been pummeling California with record-shattering amounts of moisture. On Wednesday, parts of northern California received more snow in a day than New England cities like Boston have seen all winter. On Thursday, Palm Springs got eight months’ worth of rain in as many hours. In San Diego and Los Angeles, brown water thick with desert dust flooded streets, triggered mudslides, and opened up sinkholes.

The 300-mile-wide, 1,000-mile-long atmospheric river that carried all this precipitation is starting to dry up, and the worst of the drench-fest is over. But all the new rainfall records highlight the fact that atmospheric rivers, while long a distinctive feature of weather in the American West, are intensifying in a climate-changed world.

If you haven’t heard the phrase “atmospheric river” before, don’t feel too bad. It’s a meteorological term of art that hasn’t yet cracked the pop cultural lexicon, unlike some of its flashier cousins—the polar vortex, bomb cyclone, and fire clouds, to name a few. Even the American Meteorological Society only added a definition for atmospheric river to its glossary last year.

The phenomenon itself isn’t a new one: For a long time it’s been pretty normal for California to receive most of its yearly precipitation in just a few big storms. Most of those multiday deluges are the product of atmospheric rivers, high-altitude streams of air that originate near the equator and are packed with water vapor. But it’s only been in the last decade or so that scientists have learned enough about this type of weather system to tell the difference between beneficial, run-of-the-mill storms that keep water reserves full and disastrous storms that overwhelm dams, levees, and reservoirs, like the one that pummeled California this week. As that balancing act gets even tougher for the region’s water managers, some scientists are making a push to put a number on those differences, in the same way you would a tornado or a hurricane.

“Your typical weather forecast displays a symbol—a sun for sunny days, a cloud for cloudy days. But the rain cloud symbol doesn’t really describe if it’s going to be a few showers or one of these more unusually substantial storms,” says F. Marty Ralph, a research meteorologist at UC San Diego’s Scripps Institution of Oceanography and director of its Center for Western Weather and Water Extremes. He’s been spearheading a multiyear effort to develop a five-category scale for diagnosing the strength of atmospheric rivers so that water managers, emergency personnel, and the general public can quickly get a grasp on just how destructive (or beneficial) the next storm will be.



Ralph’s team unveiled their AR Cat scale earlier this month, in an article published in the Bulletin of the American Meteorological Society. The key feature it uses to assess the severity of such storms is the amount of water vapor flowing horizontally in the air. Called integrated vapor transport, or IVT, this number tells you how much fuel is feeding the system.

It’s not an easy number to calculate. To do it well requires taking multiple wind and water vapor measurements across miles of atmosphere. In the same way that terrestrial rivers flow at different rates at different depths, the water vapor molecules in atmospheric rivers travel at different speeds in the air column. Adding them all up vertically gives you the true measure of how strong a storm really is. Ralph’s team classifies storms as atmospheric rivers if they’re moving more than 250 kilograms of water per meter per second, ranging up from weak to moderate, strong, extreme, and exceptional.
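The vertical sum described above has a standard form: integrate specific humidity times horizontal wind speed over the depth of the atmosphere, divided by gravitational acceleration. Here's a rough numerical sketch with invented layer values; real IVT calculations use humidity, wind, and pressure data from soundings or model levels.

```python
# A back-of-the-envelope integrated vapor transport (IVT) calculation:
# sum the horizontal water-vapor flux (humidity x wind) over the air column.
# Layer values below are invented for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def ivt(layers):
    """IVT in kg per meter per second: (1/g) * sum(q * wind * dp)."""
    return sum(q * wind * dp for q, wind, dp in layers) / G

# (specific humidity kg/kg, wind speed m/s, layer pressure thickness Pa)
column = [
    (0.010, 20.0, 15000.0),  # moist, fast low-level flow dominates
    (0.005, 25.0, 15000.0),
    (0.001, 30.0, 15000.0),  # drier upper levels still contribute a little
]

strength = ivt(column)
# Anything over 250 kg/m/s qualifies as an atmospheric river.
print(strength > 250)
```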

But strength alone doesn’t predict how dangerous a storm will be. That’s why the AR Cat scale combines a storm’s IVT with how long it’s expected to linger. Storms that blow through in fewer than 24 hours get downgraded by one category, whereas storms that last longer than 48 hours immediately get bumped up a notch. So an “extreme” storm could be either a Cat 3 (balance of beneficial and hazardous), Cat 4 (mostly hazardous), or Cat 5 (hazardous) depending on what it does once it makes landfall.
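That two-part rule, a strength class from IVT adjusted up or down for duration, can be sketched as a small function. The thresholds here follow the published scale's steps of 250 kilograms per meter per second, but treat this as an illustration, not the researchers' operational code.

```python
# A sketch of the AR Cat logic: strength class from peak IVT, then a
# one-category adjustment for duration. Thresholds step by 250 kg/m/s
# from 250 (weak) to 1,250+ (exceptional).

def ar_category(peak_ivt, duration_hours):
    """Return an AR category 1-5, or 0 if below atmospheric-river strength."""
    if peak_ivt < 250:
        return 0  # not an atmospheric river at all
    # 250-500 -> 1 (weak) ... >= 1250 -> 5 (exceptional)
    cat = min(int((peak_ivt - 250) // 250) + 1, 5)
    if duration_hours < 24:
        cat -= 1               # brief storms get downgraded a notch
    elif duration_hours > 48:
        cat = min(cat + 1, 5)  # long-lived storms get bumped up
    return max(cat, 0)

# An "extreme" storm (peak IVT ~1,100) lands at Cat 3, 4, or 5
# depending purely on how long it lingers:
print(ar_category(1100, 20), ar_category(1100, 36), ar_category(1100, 60))
```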

That’s because the longer a storm hovers over land, funneling many Mississippi Rivers’ worth of moisture into its watersheds, the more strain it puts on those systems. The most destructive hurricanes in recent memory—Harvey in Texas and Florence in North Carolina—proved so catastrophic because they stalled over land, inundating those areas with multiple days of intense rainfall. But the current hurricane scales, which are based on wind speed, don’t take time into account. “With atmospheric rivers we had the opportunity to bake those numbers in from the very beginning,” says Ralph.

The AR Cat scale is, of course, only as reliable as the forecast model it’s built upon. And accurately predicting atmospheric rivers has long frustrated meteorological researchers. Models built on satellite data regularly flub the location of landfall by 250 miles, even when the storm is just three days out. Some of that data got a signal boost this week, as GOES-17, NOAA’s next-generation satellite, became operational over the western part of the United States.

GOES-17’s powerful new camera will fill in important gaps, especially over the Pacific Ocean, where coverage was previously sparse. “It was like watching a black-and-white television, and now we have full HD,” says Scott Rowe, a meteorologist with the National Weather Service’s Bay Area station. The new satellite also refreshes data at a much higher rate—taking a new image once every five minutes as opposed to every 10 or 15. In special circumstances, NWS forecasters can request to crank it up one notch further. On Thursday, when Rowe’s office was busy trying to predict where the California storm would go next, GOES-17 was snapping and sending images once every minute.

But according to Ralph, the new satellite’s not a complete fix for atmospheric river forecasting, because high clouds can mask what’s going on inside the storm. More fruitful are the regular reconnaissance missions Ralph has been coordinating for the past three years, sending US Air Force pilots in hurricane hunter airplanes to crisscross incoming streams of hot, wet air. At regular intervals they drop meteorological sensing devices known as dropsondes, which draw a more intimate portrait of each storm’s potential for precipitation.

It’s all part of a broader effort to help stewards of the region’s freshwater resources make better decisions about whether to keep water and risk a flooding event, or let it out ahead of the storm and risk it being a bust. The AR Cat scale, which Ralph says still needs some tuning to better articulate the risks and benefits of different kinds of storms, is aimed at making those decisions for reservoir operators as easy as one, two, three, four, five.

Knowing that a storm like the one that hit this week is a Cat 4 atmospheric river may not mean much to the average person just yet. Calibrating an arbitrary value to observed reality takes time and experience. But it’s a sign of the American West’s intensifying weather patterns that its residents need that language at all.

Cannabis is a hell of a drug. It can treat inflammation, pain, nausea, and anxiety, just to name a few ailments. But like any drug, cannabis comes with risks, chief among them something called cannabis use disorder, or CUD.

Studies show that an estimated 9 percent of cannabis users will develop a dependence on the drug. Think of CUD as a matter of the Three C’s, “which is loss of control over use, compulsivity of use, and harmful consequences of use,” says Itai Danovitch, chair of the department of psychiatry and behavioral neurosciences at Cedars-Sinai. A growing tolerance can also be a sign.

Compared to a drug like heroin, which can hook a quarter of its users, the risk of dependency with cannabis is much lower. The symptoms of withdrawal are also far less severe: irritability and depression with cannabis, compared to seizures and hallucinations with heroin. Plus, an overdose of cannabis can’t kill you.

But as medicine and society continue to embrace cannabis, we risk losing sight of the drug’s potential to do harm, especially for adolescents and their developing brains. Far more people use cannabis than heroin, meaning that the total number of users at risk of dependence is actually rather high. And studies are showing that the prevalence of CUD is on the rise—whether that’s a consequence of increased use due to legalization, a loss of stigma in seeking treatment, or some other factor isn’t yet clear. While cannabis has fabulous potential to improve human physical and mental health, understanding and then mitigating its dark side is an essential component.

Dependence is not the same as addiction, by the way. Dependence is a physical phenomenon, in which the body develops tolerance to a drug, and then goes into withdrawal if you suddenly discontinue use. Addiction is characterized by a loss of control; you can develop a dependence on drugs, for example steroids, without an accompanying addiction. You can also become addicted without developing a physical dependence—binge alcohol use disorder, for instance, is the condition in which alcohol use is harmful and out of control, but because the use isn't daily, significant physical dependence may not have developed. “An important similarity that all addictive substances tend to have is a propensity to reinforce their own use,” says Danovitch.

Cannabis, like alcohol or opioids, can lead to both physical dependency (and the accompanying withdrawal symptoms) and addiction. But the drug itself is only part of the equation. “The risk of addiction is really less about the drug and more about the person,” says Danovitch. If it were just about the drug, everyone would get hooked on cannabis. Factors like genetics and social exposure contribute to a person’s risk.

Another consideration is dosing. Cultivators have over the decades developed strains of ever higher THC content, while the compound in cannabis that offsets THC’s psychoactive effects, CBD, has been almost entirely bred out of most strains. Might the rise in the prevalence of CUD have something to do with this supercharging of cannabis?

A new study in the journal Drug and Alcohol Dependence found that individuals whose first use of cannabis involved a high THC content (an average of around 12 percent THC) had more than four times the risk of developing the first symptom of CUD within a year. (Two caveats: the participants in this study had a history of other substance use disorders, and it looked at the first symptom of CUD, not a full-tilt diagnosis.)

Figuring out such details improves the odds that we’ll be able to detect and treat cannabis use disorder. “Early intervention is important to address substance use before it progresses to a substance use disorder,” says Iowa State University psychologist Brooke Arterberry, coauthor of the study. But to pull that off, she says, we need to better understand when and why symptoms emerge.

Those answers will likely be especially important in intervening with adolescent users, whose brains continue to develop into their mid-20s. Studies suggest that heavy cannabis use among this demographic can lead to changes in the brain. Particularly concerning is the apparent link between cannabis and schizophrenia, the onset of which can happen in the early 20s.

It’s also important to keep in mind that in the grand scheme of drugs, cannabis is nowhere near as risky as opioids. But because of prohibition, scientists have been hindered in their ability to gather knowledge of how cannabis works on the human body, and how different doses affect different people (and potentially the development of CUD). Once acquired, those insights can inform how people should be using the drug. Groups like the National Organization for the Reform of Marijuana Laws, for example, want proper labeling to keep cannabis out of the hands of children. And we need clear communication of the potency of products that can be very powerful—a chocolate bar containing 100 milligrams of THC is not meant to be consumed all at once.

“The reasons we demand proper labeling is all because of an awareness that cannabis is a mood-altering substance,” says Paul Armentano, the organization’s deputy director. “It possesses some potential level of dependence and it carries potential risk. And we believe prohibition exacerbates those potential risks, while regulation potentially mitigates those risks.” Like other substance disorders, cannabis use disorder is treatable. And as scientists develop a better understanding of CUD, we can intervene with appropriate therapies.

Cannabis has big potential to treat a range of ills. And it’ll benefit users even more once we’ve characterized its risks more precisely.

The pistol shrimp, aka the snapping shrimp, is a peculiar contradiction. At just a few inches long, it wields one proportionally sized claw and another massive one that snaps with such force the resulting shockwave knocks its prey out cold. As the two bits of the claw come together, bubbles form and then rapidly collapse, shooting out a bullet of plasma that in turn produces a flash of light and temperatures of 8,000 degrees Fahrenheit. That’s right—an underwater creature that fits in the palm of your hand can, with a flick of its claw, weaponize a blast of insanely hot bubbles.

Now scientists are learning how to wield this formidable force themselves. Today in the journal Science Advances, researchers detail how they modeled a robotic claw after the pistol shrimp’s plasma gun to generate plasma of their own. That could find a range of underwater uses, once scientists have honed their version of one of evolution’s strangest inventions.

If all the pistol shrimp has is a plasma-blasting hammer, all the world indeed looks like a nail. It uses its claw to hunt, sure, but also to communicate with short snaps that measure an insane 210 decibels. (An actual pistol shot produces around 150 decibels.) Some species even use the plasma blasts to carve out bits of reef for shelter. The result is a seafloor that’s so noisy, it can actually interfere with sonar.
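Because decibels are logarithmic, the gap between the shrimp's snap and a pistol shot is far bigger than the raw numbers suggest. A quick back-of-the-envelope conversion makes the point, with the caveat that underwater and in-air decibels use different reference pressures, so the comparison is looser than it looks.

```python
# Decibels are logarithmic: every 20 dB is a tenfold jump in sound pressure.
# Note that underwater and in-air sound levels use different reference
# pressures, so treating 210 dB vs. 150 dB as directly comparable is a
# rough illustration, not a rigorous acoustic comparison.

def pressure_ratio(db_a, db_b):
    """Ratio of sound pressure amplitudes for a given decibel difference."""
    return 10 ** ((db_a - db_b) / 20)

print(pressure_ratio(210, 150))  # a 60 dB gap is a 1,000x pressure ratio
```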

Texas A&M mechanical engineer David Staack figured that versatility might come in handy for humans as well. His team began by getting ahold of some live pistol shrimp. Like other arthropods, these animals periodically molt, shedding their exoskeletons as they grow. Those exoskeletons gave Staack a nice little cast of the claw, which he then scanned to create a detailed 3D model. This he sent off to Shapeways, the commercial 3D printing service, and got back a plastic version of the pistol shrimp’s plasma gun.

This allowed Staack to experiment with the unique structure of the limb. The top half of the claw, which the shrimp cocks back and locks, includes a “plunger,” which slams into a “socket” in the lower half of the claw. This creates a fast-moving stream of water that produces bubbles, also known in this situation as cavitation.

“That reminded us of a mousetrap,” he says. “So we actually did some experiments where we put some mousetraps underwater just to see how fast the little arm would spin as you triggered it. We took that mousetrap idea and applied it as a way to snap the claw shut.”

In Staack’s version of the claw, its top half rapidly spins on a spring-loaded rod, creating enough force to slam the plunger into the socket. This action generates a high-velocity stream of water that in turn produces a cavitation bubble, which initially is low pressure and relatively large. But then it begins to collapse.

“The water pushes in, and pushes in, and pushes in, and you get very high pressures and temperatures,” he adds. The temperatures are so high, in fact, that they create light-emitting plasma, which you can also see when the pistol shrimp snaps its own claw. “As it tries to push the water back out, it sends out a shockwave.” That is how the crustacean knocks out its prey in the wild.

In the lab, the researchers used high-speed cameras to observe the jet of water erupting from their claw. They also imaged the resulting shockwaves, capturing the flash of light as the plasma forms.

The pistol shrimp doesn’t have a monopoly on underwater plasma generation. People weld underwater using plasma, known as plasma arc welding, which produces intense heat. And researchers can also make plasma in water with lasers. The problem is, those means are inefficient. Using the claw to generate plasma is 10 times more efficient than those previously explored methods, according to Staack. It will, though, require more development to scale.

It may well become even more efficient, because the researchers need not faithfully follow the biology of the pistol shrimp. In fact, Staack realized they could trim down the size of the upper bit of the claw. In the actual pistol shrimp, it’s bulbous because it holds the muscles required to operate the limb. But this robotic version isn’t constrained by that biology.

“Replicating what the animal has done is the first step,” says Stanford University biologist Rachel Crane, who helped develop Ninjabot, a device that replicates the strike of the mantis shrimp, which similarly produces cavitation bubbles. “Then you can look at that and figure out, yeah, I don't need a giant muscle, and so I can cut this part out. Then you can engineer a better system.”

Researchers might even want to look back to nature for ways to tweak the system. Hundreds of species of pistol shrimp are snapping away out there in the sea, each with its own uniquely adapted claw. And even individuals within a species vary in their morphology.

“The substrate for evolution, the only reason we have snapping shrimp of all these different varieties today, is because of individual variation,” says Duke biologist Sheila Patek, who studies the strike of the mantis shrimp. So while the researchers can make their own tweaks to their claw robot, they can also draw inspiration from the inherent diversity of pistol shrimp to play with claw morphologies other than the one they originally 3D printed.

That diversity may one day see a pistol-shrimp-inspired device used in a range of fields. One approach would be to use claw-generated plasmas to drill through rock, as the crustacean does out in the wild to make a home in a reef. Or you might use the system for water purification by breaking up water into its constituent parts, which forms a peroxide. “These peroxides can then attack organic contaminants in the water,” Staack says. “If you're thinking about cleaning municipal water or cleaning wastewater, efficiency becomes very important.”

And so the pistol shrimp finds a few more nails.

This weekend, the Perseid meteor shower will light up the moonless sky, the product of dust breaking off from the same Swift-Tuttle comet that sends its greetings to Earth every August. The richest display of meteors will appear between August 11 and 13. But as you settle down to watch those fragments burn up into nothingness as they hit the Earth’s atmosphere, something else will also be watching: NASA’s all-sky meteor camera network.

Every night, 17 video cameras scattered across the United States scan the skies for meteors. Each one sits in a sturdy white cylindrical tube for protection, sealed off by a transparent dome lens that gives a clear view in all directions. “Every fireball lives only once,” says Bill Cooke, the head of NASA’s Meteoroid Environment Office in Huntsville, Alabama. But this network of cameras makes them immortal.

As the Perseids reach their peak, a computer will scan the meteor cameras—which can see as far as 100 miles away—to detect motion. Stars and planets don’t move, so they disappear into the background. An airplane moves, but it has flashing lights. Bugs move, but not in a straight line. Satellites move, but slowly. Eventually, the computer disqualifies everything except for meteors.
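That process of elimination can be sketched as a simple rule-based filter. The track fields and thresholds below are invented for illustration; NASA's actual detection software is far more sophisticated.

```python
# A toy version of the filtering logic: a track survives only if it moves
# fast, in a straight line, and without blinking. Fields are hypothetical.

def looks_like_a_meteor(track):
    if track["speed_deg_per_s"] == 0:
        return False   # stars and planets sit still in the frame
    if track["blinks"]:
        return False   # airplanes carry flashing lights
    if not track["straight"]:
        return False   # bugs wander rather than streak
    if track["speed_deg_per_s"] < 1.0:
        return False   # satellites crawl across the sky
    return True

tracks = [
    {"name": "Vega",      "speed_deg_per_s": 0.0,  "blinks": False, "straight": True},
    {"name": "airliner",  "speed_deg_per_s": 2.0,  "blinks": True,  "straight": True},
    {"name": "moth",      "speed_deg_per_s": 5.0,  "blinks": False, "straight": False},
    {"name": "satellite", "speed_deg_per_s": 0.1,  "blinks": False, "straight": True},
    {"name": "Perseid",   "speed_deg_per_s": 20.0, "blinks": False, "straight": True},
]

survivors = [t["name"] for t in tracks if looks_like_a_meteor(t)]
print(survivors)  # everything except the meteor is disqualified
```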

Over the 10-year span of the program, the all-sky network has counted over 30,000 separate meteors. Two years ago, the tally got a boost when gravity from Jupiter concentrated the Perseid comet dust closer to Earth’s orbit than usual: The cameras tracked about two fireballs a minute. This weekend, we’ll only see about one per minute, which is pretty average for the Perseids. But because the shower is going to coincide with a new moon and an exceptionally dark sky, the cameras will be able to pick up meteors that otherwise would have been too faint to see.

Beyond the view of the cameras are many more meteors, too dim to be picked up. But the number of meteors the cameras do capture provides a clue: an uptick in the number of visible meteors is usually correlated with a rise in unseen ones, too. Cooke and others at his office relay this information to spacecraft operators, who use it, however imprecise, to decide whether they want to steer away from areas with more space debris.

The US meteor camera system is just a part of a bigger picture; other countries have their own networks. They try to share their data, though it’s not always easy given that each network collects and processes the data according to its own methods. But they try. Every three years, scientists from the different systems meet at a conference to share advice, talk about new ideas, and write papers on their progress. In 2016, the conference was held in the Netherlands; next year, they’ll be headed to Slovakia.

For its network, NASA’s Meteoroid Environment Office actively sought out universities, science centers, and planetariums across the country to host the cameras. They had to be places with a strong internet connection that were nevertheless tucked away from the city glow that causes light pollution. Once, shortly after a local paper ran a story about one of the meteor cameras, someone used it for rifle target practice—so Cooke no longer likes to reveal the exact locations.

He did make an exception for a more protected camera hosted at the Pisgah Astronomical Research Institute, a research and educational outreach non-profit in the Appalachian mountains of western North Carolina. The camera is placed, like a star atop a Christmas tree, on the rooftop of the highest building on the highest hill at the campus. It’s above the maples, oaks, and poplars, for a view from horizon to horizon. NASA’s Meteoroid Environment Office came and installed the camera, and Lamar Owen, chief information officer at PARI, takes care of it and cleans the lens periodically.

Often when school groups come to visit PARI, Owen and other PARI educators will pull up NASA’s meteor website and show them images of the meteor streaks caught by the camera network. The students can also look up what part of the solar system the meteor came from, its trajectory, and its estimated size, calculations courtesy of geometry and celestial mechanics. “They think it’s the coolest thing ever,” says Owen.

But for this weekend’s Perseid showers, PARI advises that you ditch the cameras and look up at the night sky yourself. In fact, PARI is going to host a camping trip above the treeline. Unless it rains—and the local forecast has been almost uniformly cloudy recently—people will be able to pull their sleeping bags out of their tents and lie in the warm North Carolina night to watch the meteor shower. Stay up late enough, and you can join them: Just count light streaks rather than sheep.

Like a hit-and-run driver who races from the scene of a crash, the interstellar guest known as ’Oumuamua has bolted out of the solar system, leaving confusion in its wake. Early measurements seemed to indicate that it was an asteroid—a dry rock much like those found orbiting between Mars and Jupiter. Then by this past summer, astronomers largely came around to the conclusion that it was instead a comet—an icy body knocked out of the distant reaches of a far-off planetary system.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Now a new analysis has found inconsistencies in this conclusion, suggesting that ’Oumuamua may not be a comet after all. Whether it’s actually a comet or an asteroid, one thing is clear: ’Oumuamua is not quite like anything seen before.

The object was first spotted a year ago by scientists with the Pan-STARRS telescope in Hawaii. ’Oumuamua (a Hawaiian word meaning “scout”) appeared to be a rocky, elongated asteroid at first, a stubby cosmic cigar.

Other astronomers quickly joined in the hunt, measuring everything they could. (One team even trained radio telescopes on it to check whether it might be transmitting extraterrestrial broadcasts. It was not.) By last December, a team of astronomers published ’Oumuamua’s electromagnetic spectra, which can be used to probe what an object is made of. The researchers found that ices with organic material similar to those seen in comets in our solar system lurked just below ’Oumuamua’s surface; that ice could have survived a long interstellar journey.

They also looked at ’Oumuamua’s rotation. Many asteroids tend to spin around their long axis like an expertly thrown football. ’Oumuamua, by contrast, tumbled slightly like an errant pass by Charlie Brown.

A few months later, another collaboration found that ’Oumuamua wasn’t just being pulled by the sun’s gravity. Instead, it was being slightly accelerated by an unseen force, which they argued could only be attributed to comet “outgassing” acting like a thruster. With this additional information, the case appeared to be closed. “Interstellar asteroid is really a comet,” read the headline of a press release put out by the European Space Agency.

The explanation seemed to fit with what we know about our own solar system. In the distant reaches beyond Neptune, countless comets orbit our sun. Anytime one of these comets gets too close to a planet, it could be ejected out into the galaxy. In contrast, there are far fewer asteroids in the asteroid belt, and they orbit closer to the sun, where they’re harder to knock into interstellar space. “There are more comets, and it’s easier to fling them away from a planetary system,” said Ann-Marie Madigan, an astrophysicist at the University of Colorado, Boulder. “For the first interstellar traveler that we see in our solar system, for that to be an asteroid, would be shocking.”

Yet comets have tails. And ’Oumuamua, if it was indeed made out of icy rock and propelled by jets of gas as it passed by the sun, should have displayed a tail that would settle the question of its origin. But no tail was ever found.

Now in a new study that is currently under peer review, Roman Rafikov, an astrophysicist at the University of Cambridge, argues that the same forces that appeared to have accelerated ’Oumuamua — the same forces that should have also produced a tail — would have also affected its spin. In particular, the acceleration would have torqued ’Oumuamua to such a degree that it would have spun apart, breaking up into smaller pieces. If ’Oumuamua were a comet, he argues, it would not have survived.

“There’s very strong and unequivocal evidence on both sides,” said Rafikov. “If it’s an asteroid, then it’s really unusual, with exotic scenarios for its formation.” He proposed such a scenario earlier this year, whereby an ordinary star dies, forming a white dwarf, and in the process rips apart a planet and launches the shards clear across the galaxy. ’Oumuamua is one of those shards. “Basically, it’s a messenger from a dead star,” he said.

In part to help resolve the impasse, researchers have tried to identify the star system where ’Oumuamua originated by combing through the newly released data troves of the Gaia space telescope. Perhaps it came from a binary star system, or a system with a giant planet, either of which could have launched the object into interstellar space.

But of all the possible candidate star systems, none provided a match. ’Oumuamua’s trajectory was at least two light-years away from all the candidates anyway — too far for them to be its source. And if ’Oumuamua got launched hundreds of millions of years ago, all the local stars will have shifted quite a bit since then. “It’s unlikely you’d ever be able to track it back to a single individual parent system, which is a shame, but it’s just the way things are,” said Alan Jackson, an astronomer at the University of Toronto.

Ultimately the transient nature of the observations has frustrated astronomers’ ability to solve the mystery of our first interstellar guest. “We had only a few weeks, with almost no planning, to make the observations,” said Matthew Knight, an astronomer at the University of Maryland. “Everybody’s trying to wring out every last bit of information they can from what data we were able to collect as a community.” Had ’Oumuamua been spotted earlier, or had Hurricane Maria not taken Puerto Rico’s Arecibo Observatory out of action, astronomers would have more to go on.

And although ’Oumuamua was the first visitor from outside the solar system, astronomers will soon have more to puzzle over. Estimates are that the Large Synoptic Survey Telescope, scheduled for “first light” in 2021 in Chile, could find as many as one such object every year for a decade.

“What I hope ’Oumuamua brings home is that planetary systems grow and evolve. They create trillions of little planetesimals throughout the galaxy, and some of those will come and visit us every once and a while,” Bannister said. “Our planetesimals are no doubt visiting other stars.”


This is sort of awesome. It's a concrete gravity battery. What? Yup. The idea is to even out the balance between power generation and power usage; like with any battery, this one allows you to store extra energy for use at a later time when demand is higher. Or maybe you could use solar power during the day to store energy in the battery to be used at night—you know, when the sun doesn't shine.

So, how does this work? There are really two physics parts to this concrete battery: gravitational potential energy and electric motors.

Gravitational Potential Energy

If you pick up a textbook from the floor and put it on a table, it will require about 10 joules of energy—a unit where 1 J = 1 kg*m2/s2. We can calculate the change in energy by lifting things using the work-energy principle. This says that work done on a system is equal to the change in energy of that system, and also that work depends on the force pushing on that system and the distance the force moves. Here I am using "system" to mean some thing or collection of things.

In the expression for work, W = FΔr cos(θ), Δr is the distance the force moves, and θ is the angle between the force and the direction it is moving.

If you want to lift a book with a mass (this includes most books you will find), then you will need to push up with a force equal in magnitude to the gravitational force. On the surface of the Earth, the gravitational force is the product of the mass (in kilograms) and the gravitational field with a value of approximately 9.8 newtons per kilogram.

So lifting a book up a distance h would have an angle between the force and displacement of 0° (remember that cosine of 0° = 1). The work done lifting an object of mass (m) to a height (h) would then be W = mgh.

This change in energy of the book is called gravitational potential energy. The more mass you lift, the greater the stored energy. The higher you lift the mass, the greater the potential energy. If you increase the gravitational field—oh wait. The only way to change the gravitational field is to change the size or mass of the Earth. Forget that part.
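To put a number on this, here is a minimal sketch of the potential-energy calculation for the book example. The mass and height are assumptions (the text only says "about 10 joules"), chosen to match that estimate:

```python
# Gravitational potential energy: delta_U = m * g * h
g = 9.8   # gravitational field near Earth's surface, N/kg
m = 1.0   # mass of the textbook in kg (assumed, not given in the text)
h = 1.0   # height lifted in meters (assumed, not given in the text)

delta_U = m * g * h  # change in gravitational potential energy, joules
print(delta_U)  # 9.8 J -- roughly the "about 10 joules" quoted above
```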

Motors and Generators

OK. You lift some mass and you can store energy. That's great, but how do you get the energy back into something useful? The answer is to use the same thing that lifted the mass—an electric motor. Yes. You can get energy out of a raised mass and an electric motor.

An electric motor isn't that complicated. It's really just a coil of wire and a magnet. As you run electric current through the wire, the current creates a magnetic field. This current-induced magnetic field interacts with the magnetic field produced by the other magnets, resulting in a rotating motor. If you want, you can probably build a simple electric motor with stuff you have around the house. Here are the instructions.

Let me show you an electric motor connected to a battery.

This electric motor lifts the mass to store the energy—but it also produces electricity. Suppose you take your electric motor and disconnect it from the battery that was running it. Now you rotate the coil of wire inside the motor. It just so happens that a loop of wire moving through a magnetic field creates an electric current. Yes, it's true.

If you take that exact same electric motor and turn it with something, it will generate electricity. You can see this with that same demonstration motor above as I turn it and power a small light.

An electric generator and an electric motor are the same thing. It just depends on how you use it. So, for this concrete gravity battery, the electrical energy goes into a motor to lift a mass a certain height. When you want to get the energy out of the battery, you use the same motor to lower the mass back down to the ground, causing the generator shaft to spin and create electricity. There's your gravity battery.

Energy Stored in a Concrete Gravity Battery

Now for an estimation. How much energy can you store in a stack of cement blocks? I will need to make some approximations first.

  • Each cement thingy is a 55 gallon drum filled with cement. This would be a volume of 0.208 m3. Oh, I am going to say that a 55 gallon drum has a height of 0.889 meters.
  • The density of cement (I know I said concrete earlier—to first approximation, these are the same) is 3150 kg/m3.
  • The mass of the drum (assuming it's all cement) is the volume multiplied by the density. This puts it at 655.2 kilograms.
  • The maximum stack height is 15 meters. I don't know if this is true. I'm just saying if I built one of these things, that's how tall my crane would be in my imaginary world.
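The mass in the third bullet follows directly from the first two. A quick check, using the rounded values from the list above:

```python
# Mass of one cement-filled 55 gallon drum: volume times density
volume = 0.208    # volume of a 55 gallon drum, m^3 (value from the list above)
density = 3150    # density of cement, kg/m^3
mass = volume * density
print(round(mass, 1))  # 655.2 kg, matching the estimate above
```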

Let's start simple. I have a cement drum on the ground and I put another one on top of it. I don't get any stored gravitational energy for that first cement thingy since it can't go any lower than it already is—but the one on top of that has a stored energy that depends on the gravitational field, the mass, and the height. For the height, I am going to use the length of a cement drum (0.889 meters). The height of the center of mass for this drum is important, but if I put it back on the ground, the center of mass only moves 0.889 meters.

This means that one drum stacked on another one has a potential energy of 5,708 joules. That might seem like a bunch of energy, but your smartphone battery can store about 20,000 joules (crazy but true). But wait! What if I stack another drum? This third drum will have a stored energy that is twice that of the second one, since it will be twice as high. Higher cement things have more energy.

I'm going to just continue stacking stuff higher and higher until I get up to 15 meters. At that point, I will start back at the ground and stack again. Just for fun, here is a plot of stored energy as a function of the number of stacked drums.

Oh, you might want the code I wrote to create this graph. Boom. But that gives 2 million joules of stored energy with just 50 cement drums (assuming energy transfers are 100 percent efficient—which they aren't). That's not too bad. Of course the Tesla Powerwall can store about 50 million joules, so 50 drums might not be enough.
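That total can be reproduced with a short loop. This is not the author's actual plotting code, just a minimal sketch of the same calculation; it assumes a new stack starts on the ground whenever the next drum's top would poke above the 15 meter crane limit:

```python
# Total stored energy for 50 stacked drums (100 percent efficiency assumed)
g = 9.8            # gravitational field, N/kg
m = 655.2          # mass of one cement drum, kg
drum_h = 0.889     # height of one drum, m
max_h = 15.0       # maximum stack height (the crane limit), m

total_energy = 0.0
height_index = 0   # number of drums already beneath the next one
for drum in range(50):
    if (height_index + 1) * drum_h > max_h:
        height_index = 0                       # start a fresh stack on the ground
    total_energy += m * g * height_index * drum_h  # U = mgh for this drum
    height_index += 1

print(round(total_energy / 1e6, 2), "million joules")  # about 2 million joules
```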

Still, this battery has some nice features. It's mechanically simple to build. If you stack some cement drums, they are going to stay like that for a long time, so you don't have to worry about battery drainage. Also, in the end all you need is a crane, a motor, and some cement.

This story originally appeared on The New Republic and is part of the Climate Desk collaboration.

Germany was supposed to be a model for solving global warming. In 2007, the country’s government announced that it would reduce its greenhouse gas emissions by 40 percent by the year 2020. This was the kind of bold, aggressive climate goal scientists said was needed in all developed countries. If Germany could do it, it would prove the target possible.

So far, Germany has reduced its greenhouse gas emissions by 27.7 percent—an astonishing achievement for a developed country with a highly developed manufacturing sector. But with a little more than a year left to go, despite dedicating $580 billion toward a low-carbon energy system, the country “is likely to fall short of its goals for reducing harmful carbon-dioxide emissions,” Bloomberg News reported on Wednesday. And the reason for that may come down not to any elaborate solar industry plans, but something much simpler: cars.

“At the time they set their goals, they were very ambitious,” Patricia Espinosa, the United Nations’ top climate change official, told Bloomberg. “What happened was that the industry—particularly the car industry—didn’t come along.”

Changing the way we power our homes and businesses is certainly important. But as Germany’s shortfall shows, the only way to achieve these necessary, aggressive emissions reductions to combat global warming is to overhaul the gas-powered automobile and the culture that surrounds it. The only question left is how to do it.

In 2010, a NASA study declared that automobiles were officially the largest net contributor of climate change pollution in the world. “Cars, buses, and trucks release pollutants and greenhouse gases that promote warming, while emitting few aerosols that counteract it,” the study read. “In contrast, the industrial and power sectors release many of the same gases—with a larger contribution to [warming]—but they also emit sulfates and other aerosols that cause cooling by reflecting light and altering clouds.”

In other words, the power generation sector may have emitted the most greenhouse gases in total. But it also released so many sulfates and cooling aerosols that the net impact was less than the automobile industry, according to NASA.

Since then, developed countries have cut back on those cooling aerosols for the purpose of countering regular air pollution, which has likely increased the net climate pollution of the power generation industry. But according to the Union of Concerned Scientists, “collectively, cars and trucks account for nearly one-fifth of all U.S. emissions,” while “in total, the US transportation sector—which includes cars, trucks, planes, trains, ships, and freight—produces nearly thirty percent of all US global warming emissions … .”

In fact, transportation is now the largest source of carbon dioxide emissions in the United States—and it has been for two years, according to an analysis from the Rhodium Group.

There’s a similar pattern happening in Germany. Last year, the country’s greenhouse gas emissions decreased as a whole, “largely thanks to the closure of coal-fired power plants,” according to Reuters. Meanwhile, the transportation industry’s emissions increased by 2.3 percent, “as car ownership expanded and the booming economy meant more heavy vehicles were on the road.” Germany’s transportation sector remains the nation’s second largest source of greenhouse gas emissions, but if these trends continue, it will soon become the first.

Clearly, the power generation industry is changing its ways. So why aren’t carmakers following suit?

To American eyes, Germany may look like a public transit paradise. But the country also has a flourishing car culture that began over a hundred years ago and has only grown since then.

Behind Japan and the United States, Germany is the third-largest automobile manufacturer in the world—home to BMW, Audi, Mercedes Benz, and Volkswagen. These brands, and the economic prosperity they’ve brought to the country, shape Germany’s cultural and political identities. “There is no other industry as important,” Arndt Ellinghorst, the chief of Global Automotive Research at Evercore, told CNN.

A similar phenomenon exists in the United States, where gas-guzzlers symbolize nearly every cliche point of American pride: affluence, capability for individual expression, and personal freedoms. Freedom, in particular, “is not a selling point to be easily dismissed,” Edward Humes wrote in The Atlantic in 2016. “This trusty conveyance, always there, always ready, on no schedule but its owner’s. Buses can’t do that. Trains can’t do that. Even Uber makes riders wait.”

It’s this cultural love of cars—and the political influence of the automotive industry—that has so far prevented the public pressure necessary to provoke widespread change in many developed nations. But say those barriers didn’t exist. How could developed countries tweak their automobile policies to solve climate change?

For Germany to meet emissions targets, “half of the people who now use their cars alone would have to switch to bicycles, public transport, or ride-sharing,” Heinrich Strößenreuther, a Berlin-based consultant for mobility strategies, told YaleEnvironment360's Christian Schwägerl last fall. That would require drastic policies, like having local governments ban high-emitting cars in populated places like cities. (In fact, Germany’s car capital, Stuttgart, is considering it.) It would also require large-scale government investments in public transportation infrastructure: “A new transport system that connects bicycles, buses, trains, and shared cars, all controlled by digital platforms that allow users to move from A to B in the fastest and cheapest way—but without their own car,” Schwägerl said.

One could get away with more modest infrastructure investments if governments required carmakers to make their vehicle fleets more fuel-efficient, thereby burning less petroleum. The problem is that most automakers seek to meet those requirements by developing electric cars. If those cars are charged with electricity from a coal-fired power plant, they create “more emissions than a car that burns petrol,” energy storage expert Dénes Csala pointed out last year. “For such a switch to actually reduce net emissions, the electricity that powers those cars must be renewable.”

The most effective solution would be to combine these policies. Governments would require drastic improvements in fuel efficiency for gas-powered vehicles, while investing in renewable-powered electric car infrastructure. At the same time, cities would overhaul their public transportation systems, adding more bikes, trains, buses and ride-shares. Fewer people would own cars.

At one point, the U.S. was well on its way toward some of these changes. In 2012, President Barack Obama’s administration implemented regulations requiring automakers to nearly double the fuel economy of passenger vehicles by the year 2025. But the Trump administration announced a rollback of those regulations earlier this month. Their intention, they said, is to “Make Cars Great Again.”

The modern cars they’re seeking to preserve, and the way we use them, are far from great. Of course, there’s the climate impact—the trillions in expected economic damage from extreme weather and sea-level rise caused in part by our tailpipes. But 53,000 Americans also die prematurely from vehicle pollution each year, and accidents are among the leading causes of death in the United States. “If US roads were a war zone, they would be the most dangerous battlefield the American military has ever encountered,” Humes wrote. It’s getting more dangerous by the day.

Related Video


Al Gore Answers the Web's Most Searched Questions on Climate Change

Politician and activist Al Gore answers the Internet's most searched questions about climate change.

Is there any more tantalizing headline than “Scientists Discover a Cure for Cancer”? Some version of this fantastical claim has been dropped into the news cycle with the regularity of a super blood wolf moon for the better part of a century. In 1998, James Watson told The New York Times that a cancer cure would arrive by Y2K. This magazine hasn’t been immune either, running an “End of Cancer” headline a few years later. Each instance stirs up hope for patients and their families desperate to find a solution, no matter the risk or cost. And yet, here we are in 2019, with that constellation of complex, diverse diseases we lump together and call “cancer” for convenience's sake still killing one in eight men and one in 11 women, according to the World Health Organization’s latest stats.

You’d think creators and consumers of news would have learned their lesson by now. But the latest version of the fake cancer cure story is even more flagrantly flawed than usual. The public’s cancer cure–shaped amnesia, and media outlets’ willingness to exploit it for clicks, are as bottomless as ever. Hope, it would seem, trumps history.

What’s Happening

On Monday, the Jerusalem Post, a centrist Israeli newspaper, published an online story profiling a small company called Accelerated Evolution Biotechnologies that has been working on a potential anti-cancer drug cocktail since 2000. It was somewhat cautiously headlined “A Cure for Cancer? Israeli Scientists Think They Found One” and relied almost entirely on an interview with the company’s board chair, Dan Aridor, one of just three individuals listed on AEBi’s website. In it, Aridor made a series of sweeping claims, including this eye-popper: “We believe we will offer in a year’s time a complete cure for cancer.”

It was an especially brash move considering the company has not conducted a single trial in humans or published an ounce of data from its completed studies of petri dish cells and rodents in cages. Under normal drug development proceedings, a pharmaceutical startup would submit such preclinical work to peer review to support any claims and use it to drum up funding for clinical testing. AEBi’s PR move might be an attempt at a shortcut. In an interview on Tuesday, the company’s founder and CEO, Ilan Morad, told the Times of Israel that lack of cash flow is the reason AEBi has elected not to publish data.

The original Jerusalem Post article did not interview any outside experts in the oncology field. Nor did it inject any skepticism about the gap between speculative, preclinical work in controlled laboratory environments and a universal cure on a 12-month timeline. Anyone who knows anything about oncology will tell you that a vast number of promising treatments fail human testing. One recent estimate put success rates for cancer drugs getting to market at a dismal 3.4 percent.

What People Are Saying

About 12 hours after the Jerusalem Post tweeted out a link to its story, figures from the far right began to amplify its optimistic headline. Pro-Trump Twitter troll Jacob Wohl posted it, followed shortly by conservative political pundit Glenn Beck, who added his own self-aggrandizing touch. “As we have hoped and prayed, and I spoke about happening by 2030: A TOTAL cure for cancer.”

By Tuesday morning, Fox News had published its own report. The story did add some caveats, including a strongly worded comment emailed from a New York oncology expert, who called AEBi’s claim likely to be “yet another in a long line of spurious, irresponsible, and ultimately cruel false promises for cancer patients.” But Fox’s grabby headline retained a nearly identical formula to the original Jerusalem Post story and was copied by similar reports that cropped up on local TV news spots from Philadelphia to Melbourne, Australia.

While many major news outlets ignored the story, the New York Post and Forbes both published their own glowing versions, based largely on the Jerusalem Post’s reporting. But within 24 hours, both sites had come out with new, decidedly less rosy stories, in which they (gasp!) interviewed cancer experts. Forbes actually published two. One, by the original story’s author, was entitled “Experts Decry Israeli Team’s Claims That They Have Found the Cure for Cancer” and another, headlined even more explicitly: “An Israeli Company Claims That They Will Have a Cure for Cancer in a Year. Don’t Believe Them.”

Such course correction is not unusual, nor nefarious, in the fast-moving world of online journalism. But, as scholars of the internet attest, misinformation spreads faster online than attempts to claw it back. While outrage may be the fuel that feeds the virality of most fake news stories, when it comes to news about our health, people tend to be motivated by a more upbeat impulse. “Positivity looms larger in deciding both what to read and what to share,” wrote Hyun Suk Kim, a communications researcher at Ohio State University, in one analysis of how health news stories get shared through social networks.

So the “Cancer Cured!” piece is going to travel farther, faster, than the “Cancer Still Sucks” story. Case in point: When Forbes tweeted out its original article, it received 47 replies, 821 retweets, and 1,635 likes. The one that went out a day later, publicizing a 180-degree reversal in tone, has so far received a mere four replies, 30 retweets, and 61 likes.

Why It Matters

Social media makes it easier than ever to be a noncritical consumer of information. The constant scroll-scroll-scroll is practically designed to encourage lazy thinking. At the same time, people are hungry for a life preserver of good news amid the toxic content spewing from platforms like Twitter and Facebook. When every day online feels like a battle across party, sex, race, class, and even generational lines, cancer is a unifying enemy. A story about the end of cancer could be an olive branch to a sick friend or a relative across the social divide. Or it might just allow you to believe, for one blissful moment, that your body’s cells aren’t already on an unstoppable mutational march toward your demise.

But all the armchair philosophizing in the world can’t change the ugly truth of the persistent cancer-cure meme: Peddling false hope is immoral.

When Melissa Moore was tinkering around with RNA in the early 90s, the young biochemist had to painstakingly construct the genetic molecules by micropipette, just a few building blocks at a time. Inside the MIT lab of Nobel laureate Phil Sharp, it could take days to make just a few drops of RNA, which ferries a cell’s genetic source code to its protein-making machinery. She didn’t imagine that nearly three decades later she’d leave academia to work for a company that cranks out the stuff 20 liters at a time.

Moore heads up RNA research at Moderna Therapeutics. Worth an estimated $7 billion, it’s one of the most valuable private healthcare companies in the world, according to CB Insights. The Boston area-based biotech firm is one of a handful of businesses developing technologies to turn people’s own cells into drug manufacturing plants using messenger RNA, or mRNA. These strings of instructions could convince a patient’s body to make things like cancer-killing chemicals, heart-healing proteins, or virus-hunting antibodies. “Once you understand how to get these medicines where they need to be you can just change the sequence and make a new medicine very quickly,” says Moore. “It’s a complete sea change in our abilities.”

Maybe so, but Moderna’s pipeline remains in the early stages eight years after its founding. Operating in stealth for the first two years, the company earned an early reputation for secrecy. The editors of Nature Biotechnology at one point chastised the company—along with other biotechs, including the embattled Theranos—for its lack of publishing.

It’s only in the last year and a half, as Moderna has put several drug candidates in clinical trials, that it has begun to open up publicly, finally publishing papers with some details about the technology it’s developing. And as those trials expand—right now it has 10, with 11 more on the way—so too does Moderna. Last week the company opened a new 200,000-square-foot, $110 million manufacturing facility that will stock its trials and pre-clinical research teams with all the mRNA they require, at least for now.

“It’s counterintuitive to a startup,” said Moderna chief of staff and the new site lead, Stephen Harbin, acknowledging that the company is still years away from producing commercial products. “But it’s entirely intuitive to this startup.”

Earlier this month, when England’s hope for a World Cup trophy was still very much alive, the cowboy-booted Brit showed WIRED around the new Moderna site where employees paused in passing to exclaim things like “You going all the way?!” Harbin explained how gowned, gloved, hair-netted scientists would move through the building’s five fluorescently lit clinical clean rooms making Moderna’s first official GMP—for good manufacturing practices, the guidelines required by drug regulators—batch of mRNA when it opened on July 17.

In the first room, large stainless steel machines turn a digital sequence of genetic building blocks called nucleotides into ring-shaped DNA plasmids. In the second, enzymes convert that DNA into strands of mRNA. In room three, the mRNA gets coated in lipid nanoparticles to help it enter cells.

The last and most critical room is deep in the middle of the building, in a sealed-off aseptic block. To go there, employees have to don double layers of gowns and gloves, and move slowly so they don’t stir up any microbes that might have slipped past air filters and sanitizing scrub-downs. Preventing contamination here is of utmost importance. It’s where the mRNA gets deposited into the vials that will take them to their final destination.

Behind the clean rooms, in a part of the building Harbin says we’re not allowed to visit, workers are still finishing Moderna’s “ballrooms,” where the company plans to install a handful of refrigerator-sized, custom-designed robots for producing personalized cancer vaccines later this year. In addition to the programs that Moderna has for infectious diseases, cardiovascular disorders, and rare disease, perhaps nothing has attracted attention like the idea of designing one-off cancer-fighting drugs. A decade ago, the economics would have made it unthinkable. In terms of human labor it would cost the same to make a medicine for one patient as for a million patients, according to Moderna president Steven Hoge. But automation and advanced sequencing technologies are changing that.

“We’re going to be able to make medicines that address diseases in different people in very different ways as a result of mostly removing humans from the process,” Hoge told WIRED earlier this year. “It’s not something that is like ‘oh, this is the right color for you,’ it’s actually, ‘no, we invented this color for you.’”

Like others attempting this approach, Moderna starts the process of making each individualized treatment with a pair of genetic profiles taken from a cancer patient. One comes from a gob of tumor tissue, one from a vial of their blood. By comparing the two, an algorithm scours for the mutations that caused that particular cancer. Another algorithm produces a list of 20 protein targets it predicts will teach the patient’s immune system to attack the tumor, based on those mutations. And yet another designs the string of nucleotides that Moderna’s unique automated machines will assemble into an mRNA medicine. Human workers monitor the process from a workstation and run quality control checks, but machines do the bulk of the work.

Moderna began clinical trials for solid tumors last fall in partnership with Merck; the first patient received her individualized treatment just before Thanksgiving. The vaccines are being tested in combination with Merck’s immunotherapy drug, Keytruda, which works by impairing the cancer’s tricks for eluding the immune system.

It’s a collaborative strategy at least some of Moderna’s rivals are also employing, in the hopes of being first to market. Germany-based BioNTech has already begun Phase 1/2 trials for its individualized cancer vaccine in patients with multiple tumor types with its partner, Genentech. It got its first good manufacturing practice (GMP) authorization back in 2011. CureVac, also based in Germany, established the world’s first GMP manufacturing facility for mRNA back in 2006. It’s currently building its third and fourth plants, which will increase the company’s capacity thirty-fold by 2020. It has three cancer-fighting vaccine programs currently in clinical trials.

Some industry analysts say the lack of progress in mRNA-based cancer vaccines should cause investors concern. Dirk Haussecker, a biotech consultant based in Germany, is already turning his attention to newer technologies like Crispr gene editing, which he thinks will render most applications of mRNA, including personal cancer treatments, obsolete.

Nils Walter, a director at the University of Michigan’s Center for RNA Biomedicine, isn’t so pessimistic. He thinks the time is finally right for RNA-based therapeutics and that companies like Moderna, CureVac, and BioNTech will likely be the vanguard. But he cautions that there’s still a lot left to learn about the biology of these potential cures. “If you want to go beyond just vaccines you have to start to worry about what that mRNA is doing, because it can escape elsewhere into the body,” he says. “You inject into the muscle and it magically appears in the bloodstream.”

But he says adding Melissa Moore, who left her well-respected post at the University of Massachusetts Medical School's RNA Therapeutics Institute for Moderna, will undoubtedly help the company address those questions. “With her scientific caliber, maybe they’ll be able to see potential bottlenecks, be honest about them, and overcome them quickly,” he says. After all, she has developed many of the field's widely used RNA techniques. In a meeting with Moderna’s process innovation group, Moore realized they were using a technique she invented as a postdoc 30 years ago. She dredged up her old lab notebooks to show them.

As Moderna moves into this new chapter, she might also help the company break out of its cycle of secrecy. Moore says her team is about to publish a paper showing they can engineer an off-switch into mRNAs, so they express proteins only in the cells Moderna wants them to, like, say, cancer cells. And they’ve got more research coming on designing mRNAs that last longer in the body, which will be important for treating genetic diseases that require taking the medicine over a lifetime. The proof will be in the publishing.