What's so great about getting in a swimming pool? The answer is that it can make you feel like a superhero. Even in the shallow end, you can easily lift another person—even someone larger than you. You become the hero of the pool area (until you get out of the water). Even just floating in the pool, you feel like you are defying gravity.

OK, maybe this is just how I act in the water. Maybe you just swim laps or splash in the water. That's fine too, I guess (but try the superhero thing some time).

The reason you are so strong in the water is the buoyancy force. This is a force that pushes up on every object in water, or even in the air. OK, you rarely notice this buoyancy force in the air, but it's there (just small). To help you see it, here's a quick experiment to show how the buoyancy force works in water.

Let's say you have a glass of still water sitting on a table. It's important that the water is still. Now imagine a small section of water inside that water. Maybe it's a cube of water that is 1 cm on a side. Here's a diagram that might help.

I put a dotted line around the special water-in-the-water so you can see it. I mean, it's still just water (even though it's special). But what happens to this special water in the rest of the water? This is not a trick question. The answer is that the water just sits there. It's in water; it doesn't move. You could say it floats in the water. Really, it has to float. Otherwise it would accelerate downward, and then the water wouldn't be still. But this is still water.

If the water is just sitting there with zero acceleration, the total force on it must be zero—that's the nature of forces. This total force is a sum of two forces. The first force has to be the gravitational force pulling down. There is a gravitational force because the special water has mass, and objects with mass have a gravitational interaction with the Earth. The magnitude of this gravitational force is equal to the mass (in kilograms) multiplied by the local gravitational field (g = 9.8 N/kg). The second force is the buoyancy force from the surrounding water pushing up. Since the total force is zero, the buoyancy force must have the same magnitude as the gravitational force.

Now suppose I replace this cube of water with some other object—let's use a metal block of the exact same dimensions. Like this:

Since the metal has the exact same shape and size as the water cube, the rest of the water in the cup should interact with the metal block in exactly the same way. The net buoyancy force on this block would be equal to the buoyancy force that held the special water afloat. That means that if I calculate the gravitational force on the water that the block displaces, that would be equal to the buoyancy force. I can write this as the following expression:
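In symbols, using the variables described just below:

```latex
F_{\text{buoyancy}} = \rho V g
```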

If you are wondering what the heck that p-looking symbol is, it's the Greek letter ρ (pronounced rho), and it's the variable for density. Chemists often use "d" for density—but that's just because they aren't as cool as physicists. Oh, and water has a density of around 1,000 kilograms per cubic meter. The V in the above formula is the volume of the water displaced, and g is the gravitational field.

OK, now for an experiment. What happens if you partially submerge an object in water? Is there a way to measure this buoyancy force in a fun way? Yes, there is. Here's what I'm going to do. I have an aluminum cylinder. I can partially put it in water and suspend it from a scale.

In this case, there are three forces acting on the aluminum cylinder: the gravitational force pulling down, the spring scale pulling up, and finally the buoyancy force from the part of the cylinder that is underwater. What happens when the cylinder is lowered even more into the water? The scale reading decreases and the buoyancy force increases. Since the volume of water displaced by the cylinder will increase with the depth of the cylinder in the water, I can get the following expression for the total force.
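Taking up as the positive direction, with the terms spelled out in the list that follows:

```latex
F_{\text{net}} = F_s + \rho g h A - mg = 0
```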

This looks bad, but really it's not so bad. Let me go over the key parts.

  • The Fs term is just the force the scale pulls up on the mass. This is something that I will read off the scale.
  • Again, the ρ is the density of water and g is the gravitational field.
  • The h is the distance of the cylinder that is under the water. If I know the cross-sectional area (A) of the cylinder, then hA is the volume of the water displaced.
  • The mg is just the weight of the cylinder.

Notice that as I lower the cylinder into the water, the depth changes and the scale reading changes—everything else is constant. Since the force from the scale and the depth have a linear relationship, I should be able to plot Fs vs. h and get a straight line. That's exactly what I'm going to do. Here is what I get.

Boom. That looks pretty linear to me (as it should be). But wait! There's more. When I fit a linear equation to the data, I get a slope of -5.1335 newtons per meter and a vertical intercept of 1.088 newtons. Both of these values mean something related to the experiment. With a tiny bit of algebra (just a tiny bit), I can modify the force equation above to look like this:
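Solving the total-force equation for the scale force gives a linear function of the submerged depth h:

```latex
F_s = -\left(\rho g A\right) h + mg
```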

In this more familiar form (remember I am plotting Fs vs. h), it's easier to see that the magnitude of the slope should be ρgA and the intercept should be the weight (mg). I can check these two things. If I measure the diameter of the cylinder, I can calculate a cross-sectional area of 0.00049 m², for an expected slope magnitude of 4.81 N/m. That's pretty close. For the intercept, I get an expected value of 1.079 N. Again, close.
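As a quick sanity check on those numbers, here is the arithmetic behind the expected slope, using only values quoted in the text:

```python
# Check the fitted slope against the value predicted by F_s = -(rho*g*A)*h + mg.
rho = 1000.0      # kg/m^3, density of water
g = 9.8           # N/kg, local gravitational field
A = 0.00049       # m^2, measured cross-sectional area of the cylinder

expected_slope = rho * g * A          # predicted slope magnitude, N/m
print(round(expected_slope, 2))       # → 4.8 (the text quotes 4.81 N/m)

measured_slope = 5.1335               # N/m, magnitude from the linear fit
# Relative disagreement between fit and prediction (about 7 percent):
print(round(abs(measured_slope - expected_slope) / expected_slope, 3))  # → 0.069
```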

See? Graphs are our friends. They are a great way to show a linear relationship between two things. I try to tell my students this all the time, but they don't believe me.

Friends, have you thought about your insurance lately?

[Reader clicks close tab.]

Dammit! Wait, no, look: Climate change makes natural catastrophes worse, in both intensity and frequency, and insurance might be a significant way to pay for recovery. International aid can be unreliable; government money really is just taxpayer money. Corporations and nations have, for at least a decade, had access to quick infusions of post-disaster cash; now they might be common for regular people too—and if you’re in a disaster zone, a cash bonanza could be the difference between staying and rebuilding or having to just leave, permanently.

Typical insurance, the kind you probably have on your car or home, helps with this, but it is slooooow. It pays out only after you make a claim and get a valuation of the damages—and then you still have to wait for the check. That’s not much help if you’re wading through floodwater.

Insurers have figured out a way to speed that up—by restructuring the system. Forget about claims and adjustment; with these new kinds of policies, all it takes to get the financial ball rolling is the occurrence of a trigger, a previously agreed-upon event: an earthquake of sufficient size, say, or a hurricane with winds of a given speed. It’s called “parametric insurance,” and if one of those hazard parameters gets met, every policy holder downrange of the trigger gets an automatic payment of a set amount. Pow.

Governments and corporations are into it. The investment world originated the idea, probably because large organizations that incur complex damages appreciate a fast, predictable payout. And catastrophes can make deploying claims adjusters unsafe or outright impossible—the Nepal earthquake of 2015, for example, killed 9,000 people and caused losses in the range of $6 billion to $10 billion. Only a fraction of that was insured, and even getting help to the region was a challenge. A big, all-at-once infusion of cash would have helped.

Since 2007, countries in the Caribbean and beyond have together operated the Caribbean Catastrophe Risk Insurance Facility to deal with the problems developing countries typically face after hurricanes, earthquakes, and floods. The African Union has one, as does Hong Kong in case of typhoons. “If you have good enough data and good enough sensing technologies, such as the seismometer network in California or the hurricane-hardened WeatherFlow anemometer stations on the East Coast, you can get that data and very quickly work out whether someone should be getting paid,” says Samuel Jay Gibson, of the Capital and Resilience Solutions Group at the catastrophe risk modeling firm RMS. “This allows post-event, initial injections of cash for immediate disaster recovery.”

Until recently, individual consumers didn’t have access to parametric insurance in the US. That’s changing: In October 2018, a company called Jumpstart started offering earthquake coverage to Californians. The trigger is a quake that reaches 30 centimeters per second of peak ground velocity, a measure the US Geological Survey uses to create “shake maps” of intensity.

So if a quake hits and you’re in the “red zone” of 30 cm/sec PGV, you get an automated text message asking if you want your money. Confirm—you have to confirm for regulatory reasons—and you get a direct deposit of $10,000. “Even if there’s no damage to your stuff, your life is going to be messed up in an earthquake that big,” says Kate Stillwell, Jumpstart’s founder and CEO, a structural engineer who spent a decade building computer models of earthquake risk. That money can pay for a hotel after evacuation, for child care if schools close, for a quick car repair, or to make up for lost work days because the roads are too damaged to drive and transit is suspended.
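The appeal of parametric insurance is that the payout decision reduces to a simple rule. Here is an illustrative sketch of that rule—not Jumpstart's actual system; only the 30 cm/s PGV threshold and the $10,000 payout come from the article, and every function and variable name is hypothetical:

```python
# Hypothetical sketch of a parametric-insurance payout rule.
# Only the trigger threshold and payout amount come from the article.

PGV_TRIGGER_CM_S = 30.0   # peak ground velocity that defines the "red zone"
PAYOUT_USD = 10_000       # fixed payout, no claims adjustment

def parametric_payouts(policyholders, shake_map, confirmations):
    """Return payouts owed after an event.

    policyholders: dict of holder id -> zip code
    shake_map: dict of zip code -> measured PGV in cm/s (e.g. from USGS data)
    confirmations: set of holder ids who replied "yes" to the confirmation text
    """
    payouts = {}
    for pid, zipcode in policyholders.items():
        pgv = shake_map.get(zipcode, 0.0)
        # The measured hazard parameter alone decides; no damage valuation.
        if pgv >= PGV_TRIGGER_CM_S and pid in confirmations:
            payouts[pid] = PAYOUT_USD
    return payouts

holders = {"a": "94103", "b": "90210"}
shake = {"94103": 42.5, "90210": 3.1}   # only zip 94103 exceeds the trigger
print(parametric_payouts(holders, shake, confirmations={"a", "b"}))
# → {'a': 10000}
```

The design choice worth noticing is that damage never appears in the logic at all: basis risk lives entirely in how well the trigger threshold was chosen.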

Well, actuary

Insurance and risk are primarily about math. The basic principle of just about all insurance is that enough people pay premiums over time to cover the big payouts after an event. In California, everyone knows a big earthquake is coming. But only 10 percent of homeowners have quake insurance; the same goes for commercial buildings. Even if you survive a disaster, even if most of your stuff survives, you still face consequences. And those hit poor people—less likely to have ready cash or stable support networks—the hardest. Watching New Orleans after Hurricane Katrina, Stillwell realized that those social vulnerabilities can be as much a problem as the actual disaster. “As structural engineers, we are not doing our job if the other pieces of the resilience puzzle are not in place, and one of those pieces is getting enough money into the system,” she says. “What good are safe buildings if nobody stays to live in them?”

In business school, Stillwell learned about “catastrophe bonds,” a financial tool pegged to disasters. Parametric insurance fits that category. She also realized that new technologies—more accurate hazard models, automated financial services, and a robust text-messaging system—could actually sustain a parametric consumer business. “Fundamentally the motive was to get more money into the system,” according to Stillwell. That let her incorporate as a public benefit corporation, a so-called B-corp, to do post-disaster stimulus. Originally the company was going to pay $30,000; she says California regulators told her that amount was large enough that people might think Jumpstart was meant to cover all their losses, rather than serve as “gap coverage” in a disaster’s aftermath. So she lowered the payout. (The premiums, ranging from $11 to $33 a month depending on zip code, cover the business and the cost of paying for the collateral—a pool of money at the insurer Lloyd’s of London that makes sure Jumpstart can always pay out.)

The key to making parametric insurance work is dealing with “basis risk,” the match (or mismatch) between a trigger and the damage it can actually cause. If you model damages for a magnitude 7 quake and set that as a trigger, but then suffer damages at magnitude 4, you’ve blown it. “So how you design that trigger could vary depending on which of the different types of coverage you’re looking for,” Gibson says. “It starts with understanding the problem space and then moving backwards to an optimal parameter.”

Natural hazards are particularly amenable to this, because there are so many sensors monitoring them. New York’s MTA uses a tidal gauge; satellite images combined with topography might eventually work for flood levels. Wildfires have a distinct burn area regardless of whether your house, specifically, gets destroyed. “What you’re trying to do is say, what level of damage am I going to have, given this trigger?” says Matt Junge, head of property solutions, US and Canada, at Swiss Re, a global reinsurer and disaster information clearinghouse.

The obstacles, then, are “education”—telling people this thing is on sale—and regulation. Outside California, state regulators are still chewing on how and whether to give parametric policies the green light for consumers. They’re new, and bureaucracies are justifiably cautious. But if Jumpstart is successful, Stillwell says it will expand to other states and other perils next year. “You can imagine: summertime, East Coast.” We can indeed imagine. Every season, someone new is thinking about their insurance.

More than 5 million people across the world started out life as a sperm and an egg in a petri dish. Yet for every in vitro fertilization success story, there have been at least as many failures. Today the procedure works about 40 percent of the time for women under 35; it gets worse the older you get. But researchers and companies are hoping that a set of more experimental methods will improve those odds by hacking biology itself.

Last summer, a 32-year-old Greek woman, who’d previously undergone two operations for endometriosis and four unsuccessful cycles of IVF, once again returned to the surgical table to have a thin needle threaded through her vagina to retrieve eggs from her ovaries. But unlike in her earlier IVF attempts, this time fertility specialists did not inseminate them with her partner’s sperm right away. Instead the doctors at the Institute of Life, in Athens, took a donor’s eggs, stripped them of their nuclei, and inserted the patient’s DNA in their place. Then the modified eggs were inseminated. The resulting embryos—a combination of genetic material from three people—were transferred to the Greek woman’s womb, leading to her first successful pregnancy.

She is now 28 weeks along with a baby boy, according to a Spanish company called Embryotools, which announced the pregnancy earlier this month. The fertility tech firm is collaborating with the Institute of Life to conduct the first known human trial of the procedure, called mitochondrial replacement therapy (MRT), for treating infertility. Their pilot study in Greece will eventually enroll 25 women under the age of 40 who’ve failed to conceive using conventional methods of IVF. It’s the largest test yet of the controversial new method of procreation.

Unlike conventional IVF, which is essentially a numbers game to get a viable embryo, MRT promises to actually improve the quality of older eggs, which can take on damage as they age. If it proves to be safe and effective—a big if—it could radically change women’s prospects of having children later in life.

Fertility doctors first started messing around with the idea for MRT in the late ’90s in clinics in New York and New Jersey on a hunch that some people struggle to get pregnant because of defects in the jelly-like cytoplasm of their eggs. By 2001, the technique, often dubbed “three-person IVF,” produced a reported 30 births. Shortly after, the US Food and Drug Administration stepped in with warning letters, abruptly bringing such work in the American infertility scene to a standstill.

From the FDA’s point of view, embryos created using MRT represent an abrupt departure from nature’s normal course. The agency claims that they should be regulated like a drug or gene therapy, because these new, untested genetic relationships pose a considerable risk. While the amount of donor DNA makes up just a tiny fraction of the resulting embryo—about 0.2 percent—the potential health impacts of having any amount of donor DNA are still poorly understood. In the US, that ignorance stems in part from the fact that scientists are prevented from using federal funds for research on embryos that could result in their harm or destruction.

Critics argue that it’s unethical to expose unborn children to these unknowns when infertile parents have other options for starting a family, such as egg donation and adoption. “The potential risks of this procedure for the babies are significant but unclear; its potential value in treating infertility is inconclusive,” says Stanford bioethicist Hank Greely, who wrote about MRT in his book The End of Sex. “For now, I wouldn’t do it.”

Where the case for MRT is more compelling (to ethicists and regulators) is in preventing mitochondrial diseases. Mitochondria, the structures that float in the cytoplasm providing power to human cells, have their own DNA, separate from the DNA coiled inside chromosomes. Mutations in mitochondrial DNA can lead to debilitating, often fatal conditions that affect about one in 6,500 people worldwide. Because babies inherit all their mitochondria from the female egg—sperm lose theirs during the act of reproduction—preventing mitochondrial disease could be as simple as swapping out one egg’s mitochondria for another’s. Studies in monkeys and human cell lines have mostly supported the idea, though in some worrying cases the donor mitochondria have been shown to revert back to the mutated form.

In February, British authorities granted doctors at Newcastle University the go-ahead to begin a study assessing how well MRT could help two women affected by mitochondrial diseases conceive healthy children. The UK is the first country to legalize the use of MRT, but only for women with heritable mitochondrial disease, and only under strict oversight. Australia is also considering legislation to approve the procedure in limited cases.

In the US, such trials are effectively banned. But that hasn’t stopped the most determined MRT defenders from trying it in places with looser laws.

In 2016, a New York-based infertility specialist named John Zhang reported using MRT to facilitate the birth of a healthy baby boy at a clinic in Mexico. Valery Zukin, a fertility doctor in Kiev, Ukraine, says he has used MRT in seven successful births since May 2017, with three more on the way. Zukin says he received approval for a five-year research program from Ukrainian health authorities, but he has not registered the trial with the European clinical trial database, and he is charging patients for the procedure: $8,000 for Ukrainians and $15,000 for foreigners. In December 2017 he formed a company with Zhang to make it easier for interested Americans to access the procedure in Ukraine.

Still, the lack of a rigorous trial leaves questions about how safe and effective the procedure really is. Those gaps in knowledge are what Nuno Costa-Borges, scientific director and cofounder of Embryotools, hopes to address in his Greek study. “The only thing missing from the debate is what happens to the babies,” Costa-Borges says. “There’s no other way of testing that than to transfer the embryos. But we need to do it in a strict, well-controlled study that is scientifically rigorous.”

Some critics may not be swayed by the study’s design, which won’t have a conventional control group. Embryotools is calling it a “pilot trial,” instead. The reason, Costa-Borges says, is that the women they have been recruiting have already failed conventional IVF many times before and may not have many chances left. Those unsuccessful IVF cycles will serve as the control group.

“Comparing to historical controls is better than nothing, but it’s not ideal,” says Paula Amato, an OB-GYN at Oregon Health and Science University, where much of the modern mitochondrial replacement therapy work has been pioneered by Shoukhrat Mitalipov. She says it’s always possible that some of these women might have gotten pregnant on their next round of IVF, even without MRT. But she applauds the Embryotools team for doing something to generate meaningful data. “In the fertility field, innovations have often been adopted prior to having evidence that it works, and that’s a problem.”

As in many countries, the laws in Spain and Greece aren’t exactly clear regarding the legality of mitochondrial replacement therapy. The procedure is neither explicitly prohibited nor approved. Costa-Borges says his team decided to conduct their trial in Greece because that’s where their long-time clinical partner, the Institute of Life, has its facilities. So far, the country has been playing along.

His team received approval in Greece at the end of 2016 but only began recruiting human patients last year after completing another battery of in vitro safety tests. “We are not rushing,” Costa-Borges says. “The technology has a lot of potential, but we want to move cautiously. We don’t want to create any false expectations.” So far, he says, his team has prepared the eggs from eight additional patients. Now that the first pregnancy has crossed into the third trimester, Costa-Borges says, his team is considering moving forward with the eight others.

In addition to showing that the early stages of MRT can be conducted safely, the technique’s proponents also need to assuage critics with longer-term data on how the children develop. To that end, Embryotools is working with a pediatric hospital in Greece to monitor the health of all the babies born from its study until they are 18 years old. The company is also exploring creating a registry of every child born using MRT technologies to help track their health outcomes over their lifespan as compared to naturally conceived babies. Such a database was never established for conventional IVF births, for legal and ethical reasons.

But given the raised stakes of such genetic alterations, the idea might gain traction this time around. Just as IVF redefined the biological boundaries of baby-making four decades ago, MRT is poised to write the next chapter in human reproductive history. Even at the current pace of MRT births, pretty soon it’s going to be easy to lose count.


On Monday, the buzz of machinery echoed through SpaceX’s Hawthorne-based manufacturing facility as SpaceX president Gwynne Shotwell introduced a quartet of astronauts, each decked out in NASA blues. Behind them, tucked inside a clean room, was their ticket to low-Earth orbit: SpaceX’s Crew Dragon, still naked without its stark white outer shell.

So far, every SpaceX Dragon capsule has only carried cargo to and from the International Space Station. But that will change when NASA’s Commercial Crew program launches its astronauts—the first to leave from US soil since 2011. The first Crew Dragon is set to take off in November as part of an uncrewed flight test, and if all goes according to plan, a crew of two astronauts—Doug Hurley and Bob Behnken—will launch to the ISS for a two-week stay in April 2019. The next team, Victor Glover and Mike Hopkins, will take off some time after that.

Now that the first two crews have been announced, Behnken and Hurley—both veteran shuttle pilots who have been working on the project since 2015—will begin training on the vehicle itself. Or at least a simulacrum of it: Part of that training will happen in a two-seater cockpit simulator, located just above the clean room.

SpaceX’s new cockpit design will take more onboarding than you might think. NASA’s astronauts are used to the space shuttle’s vast array of more than 1,000 buttons and switches, but the crew will control the Dragon with the help of just three touchscreen control panels and two rows of buttons. Touch screens in space, you say? Yes, really: The astronauts’ new spacesuits, a one-piece design that’s more wetsuit than pumpkin suit, also come with conductive leather gloves that will allow them to control the screens.

The displays will both provide the crew with orbital flight tracking and give them control over the craft. Though the vehicle is designed to be autonomous, crews will have the ability to manually fly the Dragon and fire thrusters for minor course corrections. After astronauts select commands on the touch screen, the analog buttons, shielded by a clear covering, will execute them. The buttons are also used to handle emergencies: One button under the far left panel extinguishes a fire, while a large pull-and-twist handle, located under the center screen and marked “EJECT,” arms the vehicle’s launch escape system.

Learning the control panel is just the beginning. While Dragon will have both autonomous systems and a ground crew as backup, its first crews will still have to be prepared for any scenario. That’s where SpaceX’s full-scale simulator comes into play. The replica located upstairs in the astronaut training area at the Hawthorne facility comes outfitted with seats, control panels, flight software, and life-support systems, allowing SpaceX crew trainers to put the astronauts through increasingly complex failures—who knows, maybe even their own version of the Kobayashi Maru.

Outside the cavernous rocket-building warehouse, SpaceX is working on another hallmark of its strategy: reusing more of its rocket’s components—in particular, the payload fairing, also known as the nose cone. Tethered to a dock in the Port of Los Angeles, nestled among the many freighters and fishing vessels, resides one of the more recent additions to SpaceX’s fleet: a boat named Mr. Steven. SpaceX aims to use the vessel to recover the fairings, which historically have been a one-use component, as they navigate themselves back to Earth after separating from the rocket.

Each fairing—a $6 million piece of hardware—accounts for one tenth of the price of the entire Falcon 9 rocket, and SpaceX can save a bundle if it can scoop up the fairing before it lands in the ocean. Here’s where the aerospace company’s fleet of recovery vessels comes into play. Essentially a mobile catcher’s mitt, Mr. Steven is outfitted with a yellow net that spans nearly 40,000 square feet. So far, Mr. Steven’s recovery attempts have been unsuccessful, but on Monday SpaceX conducted tests that should help engineers better understand the properties of Mr. Steven’s net.

Visible in the net was one of the fairing’s two halves, attached to a crane that repeatedly lifted and lowered it to help engineers understand how the net behaves while loaded down. SpaceX wouldn’t want to catch a fairing, only to have it crash through the net and onto the ship’s deck.

Mr. Steven’s next trip out to sea will be in late September as SpaceX prepares to launch the Argentinian Earth-observing satellite SAOCOM 1A. There’s a lot riding on this launch: It will mark the company’s first attempted ground landing on the West Coast; all of its previous landings out of Vandenberg have touched down on one of the company’s drone ships. If SpaceX manages to recapture both the rocket booster and the fairing, it’ll save an estimated $37 million.

The Unknowability of the Next Global Epidemic

March 20, 2019

Disease X

n. A dire contagion requiring immediate attention—but which we don’t yet know about.

In 2013 a virus jumped from an animal to a child in a remote Guinean village. Three years later, more than 11,000 people in six countries were dead. Devastating—and Ebola was a well-studied disease. What may strike next, the World Health Organization fears, is something no doctor has ever heard of, let alone knows how to treat. It’s come to be known as Disease X.

Since René Descartes adopted the letter x to denote a variable in his 1637 treatise on geometry, it has suggested unknowability: the mysterious nature of x-rays, the uncertain values of Generation X, the conspiratorial fantasies of The X-Files. It’s also been used as code for experimental—in the names, for instance, of fighter jets and submarines. That’s an apt association: Disease X may leapfrog from animals to humans like Ebola, but it could instead be engineered in a lab by some rogue state.

Still, far from asking us to resign ourselves to an unpredictable future horror, Disease X is a warning to prepare for the worst possible scenario as best we can. It calls for nimble response teams (a critical failure in the Ebola epidemic) and broad-spectrum solutions. The WHO has solicited ideas for “platform technologies,” like plug-and-play systems that can create new vaccines in months instead of years. As Descartes showed us in mathematics, only by identifying an unknown can we begin to find an answer.

In a field at the edge of the University of Minnesota’s St. Paul campus, half a dozen students and lab technicians glance up at the darkening afternoon skies. The threatening rain storm might bring relief from the 90-degree August heat, but it won’t help harvest all this wheat. Moving between the short rows, they cut out about 100 spiky heads, put them in a plastic container, and bring them back to a growling Vogel thresher parked at the edge of the plot. From there, they bag and label the grains before loading them in a truck to take back to James Anderson’s lab for analysis.

Inside those bags, the long-time wheat breeder is hoping to find wheat seeds free of a chalky white fungus, Fusarium head blight, that produces a poisonous toxin. He’s looking for new genes that could make wheat resistant to one of the most devastating plant diseases in the world. Anderson runs the university’s wheat breeding program, one of dozens in the US dedicated to improving the crop through generations of traditional breeding, and increasingly, with the aid of genetic technologies. Today his toolbox got a lot bigger.

In a Science report published Thursday, an international team of more than 200 researchers presents the first high-quality, complete sequence of the bread wheat genome. Like a physical map of the monstrous genome—wheat has five times more DNA than you do—the fully annotated sequence provides the location of over 107,000 genes and more than 4 million genetic markers across the plant’s 21 chromosomes. For a staple crop that feeds a third of the world’s population, it’s a milestone that may be on par with the day its domestication began 9,000 years ago.

“Having breeders take the information we’ve provided to develop varieties that are more adapted to local areas is really, we think, the foundation of feeding our population in the future,” says Kellye Eversole, the executive director of the International Wheat Genome Sequencing Consortium, the public-private research team that worked for more than a decade to complete the sequence. Founded in 2005, Eversole says the IWGSC’s goal was to help improve new crop traits for a changing world.

Breeding programs like Anderson’s are constantly on the hunt for wheat strains that will meet the needs of farmers facing tough economic and environmental realities. A 2011 study in Science showed that rising temperatures are already causing declines in wheat production. More recent research in Nature suggests that trend is only going to get worse, with a 5 percent decline in wheat yields for every one degree Fahrenheit uptick.

So what kinds of traits make for better wheat? The ability to grow in hotter climates is a plus. And disease resistance is nice. But farmers have other priorities, too. “When selecting a variety they’re looking at yield first, lodging resistance second, and protein content third,” says Anderson. Lodging is when a wheat stalk gets bent over, collapsing under its own weight. Stalk strength is one way to counteract that. But breeders have to be careful to balance those traits with others, like nutritional composition. “We’re trying to build disease resistance into a total package that’s going to be attractive to a grower,” says Anderson.

Building that total package is still a slow, labor-intensive process. Breeders painstakingly pluck out pollen-producing parts from each tiny “spikelet” on a wheat stem so they can fertilize each one with pollen from plants with other desirable traits, repeating that process thousands of times each year. Then they screen and select for traits they want, which requires testing how well thousands of individual plants perform over the growing season. In Anderson’s lab, which focuses on Fusarium head blight resistance, that means spraying test field plots with fungal spores and seeing which ones don’t die. It’s only in the last three years that he’s used gene sequencing technologies to help produce more survivors.

The plot his crew was harvesting on Tuesday was what’s called a “training population”: 500 individual plants selected to represent a larger group of 2,500 that have had their genomes sequenced. By combining the genetic data with their field performance data, Anderson can make better predictions about which plants will have the best disease resistance, and which genetic backgrounds will confer that trait.
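The kind of prediction described here is commonly done with whole-genome regression, where marker genotypes from the training population are fit against field scores and the resulting model ranks unphenotyped plants. Here is a toy sketch of that idea using ridge regression on simulated data; the sizes, penalty, and data are all illustrative, not from Anderson's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 training plants x 1,000 SNP markers (0/1/2 allele counts),
# with a simulated disease-resistance phenotype. All numbers illustrative.
n_train, n_markers = 500, 1000
X = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)
true_effects = rng.normal(0, 0.1, size=n_markers)
y = X @ true_effects + rng.normal(0, 1.0, size=n_train)

# Ridge regression (genomic-BLUP-style): shrink marker effects toward zero
# so the model stays stable with more markers than plants.
lam = 50.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Score unphenotyped candidates from genotype alone and rank for selection.
candidates = rng.integers(0, 3, size=(2000, n_markers)).astype(float)
scores = candidates @ beta
best = np.argsort(scores)[::-1][:10]  # top 10 predicted performers
print(best)
```

The payoff is the last step: once the model is trained on the 500 phenotyped plants, the other 2,000 can be ranked without growing and challenging each one in the field.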

To line up those gene sequences, graduate students in Anderson’s lab used an earlier, rougher version of the reference genome, which the IWGSC published in 2014. “That’s made it much easier to identify good DNA markers that serve as tags for genes that we’re interested in tracking,” says Anderson. The sequence published today, which covers 94 percent of the genome, as opposed to 61 percent, will be even more useful for tying specific traits to specific genes and starting to make tweaks.

The question then becomes, how exactly to make those tweaks? Unlike corn, soybean, and canola, no one has yet commercialized genetically modified wheat. Anderson at one time was working with Monsanto on a Roundup-Ready wheat, but the multinational dropped out of the partnership in 2004. “It was mostly cultural, and trade-related,” says Anderson. “The world wasn’t ready for it at the time, and it’s probably still not ready for it.” Some of the largest historical importers of US wheat—Europe and Japan—have been the most hostile to genetically modified foods.

“Bread is the heart of the meal,” says Dan Voytas, a fellow University of Minnesota plant scientist and co-founder of the gene-editing agricultural company Calyxt. “It’s kind of sacred, in the public perception.” Calyxt is among a bumper crop of start-ups racing to bring the first gene-edited products to market; it’s growing a new high-fiber wheat in its sealed greenhouses, located just a few miles away from Anderson’s test plots. Newer technologies like Crispr, zinc fingers, and TALENs don’t yet face the same cultural resistance, or as much red tape, as first-generation GMOs. A USDA ruling in March of this year declared the agency would mostly decline to regulate gene-edited plants, provided they didn’t introduce any wildly distant genetic material.

The new reference genome promises to accelerate the development of new genetically engineered products, in addition to the crops coming out of traditional breeding programs. But as with most genomic work, the new wheat map has plenty of room for more detail. So far it’s mostly been manually annotated—meaning researchers went through by hand and added all the information they know about where genes are and how they function. Technology could help speed that up: The IWGSC is looking for support to fund a deep learning annotation pilot project. “We’re in a totally different time where we can try some of these high-risk, high-reward approaches that might give us 80 or 90 percent of the information and then we can go in and fill the gaps,” says Eversole. With its whopping 14.5 billion base pairs, the wheat genome might actually require some non-human intelligence to help it reach its full potential.

There was a time many years ago when cars guzzled gas like beer, teenagers raced them on Friday nights, and Detroit automakers boasted about their vehicles' ever-increasing horsepower and speed. Since then, cars have become safer, cleaner and more efficient, mostly as a result of tougher standards from Washington.

A new Trump administration proposal might bring that half a century of vehicular progress screeching to a halt, some experts say, by shrugging its collective shoulders at the growing danger of climate change and the fuel-efficiency standards designed to combat it. The White House wants to freeze future auto emissions standards and ban California from making its own tougher rules for carbon emissions from vehicles.

First, a look at the numbers. A little-noticed report issued by the National Highway Traffic Safety Administration (NHTSA) predicts that the Earth’s temperature will rise a whopping 7° Fahrenheit (4° Celsius) by 2100, assuming that little or nothing is done to reverse emissions of carbon dioxide and other greenhouse gases. The current Paris climate accords call for nations to pledge to keep warming below 3.6° F (2° C) by century’s end.

The Trump administration’s climate change scenario would likely entail catastrophic melting of ice sheets in Greenland and Antarctica, causing rising sea levels that would flood low-lying coastal areas from Maine to Texas—not to mention warmer oceans that could spawn ever-stronger hurricanes alongside pockets of inland drought, and a collapse of agriculture in many areas.

The NHTSA report came up with these doomsday numbers to argue that automobile and truck tailpipe emissions after 2020 will have such a small global impact on overall greenhouse gases that it's not worth tightening the screws on Detroit automakers. “What they are saying is we are going to hell anyhow, what difference does it make if we go a little faster,” says David Pettit, a senior attorney at the Natural Resources Defense Council. “That’s their theory of how they are dealing with greenhouse gas emissions.”

Pettit and others say the NHTSA report and the Trump administration’s proposal to roll back future tailpipe emissions standards would allow Detroit to build bigger, thirstier cars than would have been permitted under Obama-era rules. Pettit notes that he has gone from driving a 7-miles-per-gallon Chrysler in the late 1960s to a Chevy Bolt today, largely as the result of stricter federal standards that require automakers to sell clean cars alongside their SUVs and trucks.

In addition to throwing up its hands at climate change, the Trump administration also argues that continuing to increase fuel economy requirements will make the overall vehicle fleet less safe, because people will continue to drive older cars longer than they otherwise would. The argument is that the higher price tags on more fuel-efficient cars will deter consumers from buying new vehicles equipped with more advanced technology that also improves safety. But Giorgio Rizzoni, director of the Center for Automotive Research at Ohio State University, says the administration has it backwards. His study of the past 40 years concludes that safety and fuel efficiency have grown at the same time.

If the Trump administration's rules are finalized, American car buyers might end up seeing vehicles with less advanced technology on the dealer lot than overseas buyers, says Austin Brown, executive director of the UC Davis Policy Institute for Energy, Environment and the Economy. “The cars would look the same on the outside, but they would burn more gasoline, cost more money and create more emissions,” he says. That's because US cars with weaker fuel standards won't be sold on worldwide markets, he adds.

The Trump administration held public meetings on the proposal this week in Fresno, California; Dearborn, Michigan; and Pittsburgh. The deadline for written comments is October 23.

A Clever and Simple Robot Hand

March 20, 2019

If you want to survive the robot apocalypse—the nerd joke goes—just close the door. For all that they’re great at (precision, speed, consistency), robots still suck at manipulating door handles, among other basic tasks. Part of the problem is that they have to navigate a world built for humans, designed for hands like ours. And those are among the most complex mechanical structures in nature.

Relief for the machines, though, is in sight. Researchers at the University of Pisa and the Italian Institute of Technology have developed a stunningly simple, yet stunningly capable robotic hand, known as the SoftHand 2, that operates with just two motors. Compare that to the Shadow Dexterous Hand, which is hypnotizingly skillful, but also has 20 motors. The SoftHand promises to help robots get a grip at a fraction of the price.

Like other robot hands out there, the SoftHand uses “tendons,” aka cables, to tug on the fingers. But it’s arranged in a fundamentally different way. Instead of a bunch of cables running to individual fingers, it uses just one cable that snakes through pulleys in each finger. Which gives it a bit less dexterity, but also cuts down on cost and power usage. And that’s just fine: There’s no such thing as a one-technique-fits-all robotic manipulator. More complex robot hands will undoubtedly have their place in certain use cases, as might SoftHand.

To create this hand, the researchers originally built a simpler SoftHand with just one motor. “The idea is that when you turn the motor, the length of the tendon shrinks and in this way you force the hand to close,” says roboticist Cosimo Della Santina, who helped develop the system.

Let out the tendon and the fingers once again unfurl into a flat palm, thanks to elasticity in the joints. It works great if you want to, say, grip a ball. But because the fingers move more or less in unison, fine manipulation isn’t possible.

By adding one more motor, SoftHand 2 ups the dexterity significantly. Take a look at the images above. Each end of the tendon—which still snakes through all the fingers—is attached to one of two motors in the wrist. If you move the motors in the same direction, the tendon shortens, and you get the gestures in the top row: A, B, C, and D. Same principle as the original SoftHand.

But run the motors in opposite directions, and something more complex unfolds in E, F, G, and H. In this case, one motor lets out the tendon, while the other reels it in. “If you have a tendon moving through a lot of pulleys, the tension of the tendon is not constant,” says Della Santina.

If one motor is pulling, the tension on that end of the tendon will be higher. If the other is letting out the tendon, the tension on that end will be lower. By exploiting tension this way, the SoftHand requires far fewer cables than your typical robotic hand, yet can still get all those fingers a-wiggling.
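As a rough illustration of that differential-tension trick, here is a toy model of a single tendon driven from both ends. The function, gains, and friction term are invented for illustration; none of it comes from the SoftHand 2 papers.

```python
# Toy model: one tendon routed through all five fingers, driven by two
# motors with displacements q1, q2 (positive = reeling in).

def finger_flexion(q1, q2, n_fingers=5, friction=0.3):
    """Return per-finger flexion in [0, 1] (hypothetical model).

    Common-mode motion (q1 + q2) shortens the tendon and closes every
    finger equally. Differential motion (q1 - q2) shifts tension along
    the tendon: pulley friction makes tension drop along the route, so
    fingers nearer the pulling end flex more.
    """
    common = (q1 + q2) / 2.0
    diff = (q1 - q2) / 2.0
    flexions = []
    for i in range(n_fingers):
        # Position along the tendon: -1 (near motor 1) to +1 (near motor 2).
        pos = -1.0 + 2.0 * i / (n_fingers - 1)
        f = common - friction * diff * pos
        flexions.append(min(1.0, max(0.0, f)))
    return flexions

# Same-direction drive: every finger flexes equally (a whole-hand grasp).
print(finger_flexion(0.6, 0.6))
# Opposite-direction drive: flexion graded across the hand (finer poses).
print(finger_flexion(0.9, 0.3))
```

The point of the sketch is the second call: the same two motor commands that close the hand can, when run against each other, produce a gradient of finger postures from a single tendon.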

Take a look at the GIF above and you can see the difference an extra motor makes. That’s one motor in the hand on the left, and two in the hand on the right. The former sort of brute-forces it, collapsing all its fingers around the ball. The latter, though, can more deliberately pinch the ball, thanks to the differences in tension of the tendon. Same principle below with the bank note.

Given that it’s working with just two motors, SoftHand can pull off an impressive array of maneuvers. It can extend an index finger to unlatch a toolbox or slide a piece of paper off a table. It can even unscrew a jar. All of it on the (relative) cheap. Because lots of motors = lots of money.

“For robots to learn and do cool stuff, we need cheap, reliable, and complex systems,” says Carnegie Mellon University roboticist Lerrel Pinto, who works on robot manipulation. “I think their hand strikes this balance,” he adds, but the real test is whether other researchers find uses for it. “Can it be used to autonomously learn? Is it reliable and robust over thousands of grasps? These questions remain unanswered.”

So SoftHand has promise, but more complicated robotic manipulators like the Shadow Dexterous Hand still have lots to offer. The SoftHand might be good for stereotyped behaviors, like unscrewing jars, while the Shadow and its many actuators might adapt better to more intricate tasks.

Fist bumps, though? Leave that to old Softie.

Police departments around the country are getting increasingly comfortable using DNA from non-criminal databases in the pursuit of criminal cases. On Tuesday, investigators in North and South Carolina announced that a public genealogy website had helped them identify two bodies found decades ago on opposite sides of the state line as a mother and son; the boy’s father, who is currently serving time on an unrelated charge, has reportedly confessed to the crime. It was just the latest in a string of nearly two dozen cold cases cracked open by the technique—called genetic genealogy—in the past nine months.

This powerful new method for tracking potential suspects through forests of family trees has been made possible, in part, by the booming popularity of consumer DNA tests. The two largest testing providers, Ancestry and 23andMe, have policies in place to prevent law enforcement agencies from directly accessing the genetic data of their millions of customers. Yet both companies make it possible for customers to download a digital copy of their DNA and upload the file to public databases where it becomes available to police. Searches conducted on these open genetic troves aren’t currently regulated by any laws.

But that might not be true for much longer, at least in Maryland. Last month, the state’s House of Delegates introduced a bill that would ban police officers from using any DNA database to look for people who might be biologically related to a strand of offending, unknown DNA left behind at a crime scene. If it passes, Maryland investigators would no longer have access to the technique first made famous for its role in cracking the Golden State Killer case.

Maryland has been a leader in genetic privacy since 2008, when the state banned the practice of so-called “familial searches.” This method involves comparing crime scene DNA with genetic registries of convicted felons and arrestees, in an attempt to identify not only suspects but their relatives. Privacy advocates argue that this practice turns family members into “genetic informants,” a violation of the Fourth Amendment. A handful of other states, including California, have also reined in the practice. But only Maryland and the District of Columbia outlawed familial search outright.

“Everyday law enforcement should never trump the Constitution,” says delegate Charles Sydnor III, an attorney and two-term Democrat from Baltimore. Sydnor is sponsoring the current bill, which would expand Maryland’s protections of its residents’ DNA even further. “If the state doesn’t want law enforcement searching databases full of its criminals, why would it allow the same kind of search conducted on citizens who haven’t committed any crimes?”

But opponents of House Bill 30 dispute that the two methods share anything in common. At a hearing for the proposed law in late January, Chevy Chase police chief John Fitzgerald, speaking on behalf of Maryland chiefs and sheriffs, called the bill a “mistake” that would tie investigators’ hands. “A search is a government intrusion into a person’s reasonable expectation of privacy,” he said. Because public databases house DNA from people who have freely consented to its use, as opposed to being compelled by police, there can be no expectation of privacy, said Fitzgerald. “Therefore, there is no search.”

Here is where it might help to have a better idea of how genetic genealogy works. Some police departments enlist the help of skilled sleuths like Barbara Rae-Venter, who worked on both the Golden State Killer and recent North and South Carolina murder cases. But most hire a Virginia-based company called Parabon. Until last spring, Parabon was best known for its work turning unknown DNA into forensic sketches. In May it began recruiting people skilled in the art of family-tree-building to form a unit devoted to offering genetic genealogy services to law enforcement.

The method involves extracting DNA from a crime scene sample and creating a digital file made up of a few hundred thousand letters of genetic code. That file is then uploaded to GEDMatch, a public warehouse of more than a million voluntarily uploaded DNA files from hobby genealogists trying to find a birth parent or long-lost relative. GEDMatch’s algorithms hunt through the database, looking for any shared segments of DNA and adding them up. The more DNA shared between the crime scene sample and any matches, the closer the relationship.

Parabon’s genealogists take that list of names and, using public records like the US Census, birth and death certificates, newspaper clippings, and social media, build out family networks that can include many thousands of individuals. They then narrow down the list to a smaller cohort of likely suspects, which they pass on to their law enforcement clients. In both genetic genealogy and familial search, these lists of relatives generated by shared DNA are treated as leads, for police to investigate further using conventional detective work.

It’s easy to understand why law enforcement agencies in Maryland would want to halt the bill in its tracks. Last year, Parabon helped police in Montgomery and Anne Arundel Counties arrest suspects in two cold cases—a home invasion that turned deadly and a serial rapist who targeted elderly victims. The company declined to disclose how many open cases it is currently pursuing with the state, saying only that it has working relationships with a number of police departments across Maryland. The cost of each case varies, based on the number of hours Parabon’s genealogists put into the search, but on average it runs about $5,000.

Parabon’s CEO, Steven Armentrout, who also spoke out against the bill at the hearing last month, suggests that forensic genetic genealogy is no different than police knocking on doors. “A lead is a lead, whether it’s generated by a phone tip or security camera footage or a consenting individual in a public DNA database.” When police canvass a neighborhood after a crime has taken place, some people will decline to answer, while others will speak freely. Some might speak so freely that they implicate one of their neighbors. “How is this any different?” Armentrout asks.

The difference, say privacy advocates, is that genetic genealogy has the potential to ensnare many more innocent people in a net of police suspicion based solely on their unalterable biology. Today, more than 60 percent of Americans of European ancestry can be identified using open genetic genealogy databases, regardless of whether they’ve ever consented to a DNA test themselves. Experts estimate it will be only a few years before the same will hold true for everyone residing in the US.

“There isn’t anything resembling consent, because the scope of information you can glean from these types of genetic databases is so extensive,” says Erin Murphy, a law professor at New York University. Using just a criminal database, a DNA search would merely add up stutters of junk DNA, much like identifying the whorls on a pair of fingerprints. DNA in databases like GEDMatch, however, can tell someone what color eyes you have, or if you have a higher-than-average risk of certain kinds of cancer. The same properties that make that kind of data much more powerful for producing more distant and more accurate kin relationships make it much more sensitive in the hands of police.

Until very recently, GEDMatch was police investigators’ only source of consumer DNA data. Companies like 23andMe and Ancestry have policies to rebuff requests by law enforcement. But last week Buzzfeed revealed that another large testing firm, Family Tree DNA, has been working with the FBI since last fall to test crime scene samples. The arrangement marked the first time a commercial company has voluntarily cooperated with authorities. The news came as a shock to Family Tree DNA customers, who were not notified that the company’s terms of service had changed.

“That’s why a bill like this is so important,” says Murphy. Because the fine print can change at any time, she argues that demanding more transparency from companies or more vigilance from consumers is insufficient. She also points out that the proposed legislation should actually encourage Marylanders to engage in recreational genomics, because they can worry less about the prying eyes of law enforcement.

But despite Maryland’s history, HB 30 faces an uphill battle. In part, that’s because the ban this time around has been introduced as a stand-alone measure. The 2008 prohibition was folded into a larger package that expanded DNA collection from just-convicted felons to anyone arrested on suspicion of a violent crime. The other hurdle is that in 2008 there were not yet any well-publicized familial search success stories. With resolutions to several high-profile cold cases, forensic genetic genealogy has already captured the public imagination.

Delegate Sydnor, who comes from a law enforcement family—his father is a probation officer, and he has one uncle who is a homicide detective and another who is an FBI agent—says he wants to catch criminals as much as anyone. He just wants to do it the right way. “DNA is not a fingerprint,” he says. “A fingerprint ends with you. DNA extends beyond you to your past, present, and future. Before we decide if this is the route we really want to take, citizens and policymakers have to have a frank and honest conversation about what we’re really signing up for.” Over the next few months, that’s exactly what Marylanders will do.

If you want to watch sunrise from the national park at the top of Mount Haleakala, the volcano that makes up around 75 percent of the island of Maui, you have to make a reservation. At 10,023 feet, the summit provides a spectacular—and very popular, ticket-controlled—view.

Just about a mile down the road from the visitors center sits “Science City,” where civilian and military telescopes curl around the road, their domes bubbling up toward the sky. Like the park’s visitors, they’re looking out beyond Earth’s atmosphere—toward the sun, satellites, asteroids, or distant galaxies. And one of them, called the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, just released the biggest digital astro data set ever, amounting to 1.6 petabytes, the equivalent of around 500,000 HD movies.

From its start in 2010, Pan-STARRS has been watching the 75 percent of the sky it can see from its perch and recording cosmic states and changes on its 1.4 billion-pixel camera. It even discovered the strange 'Oumuamua, the interstellar object that a Harvard astronomer has suggested could be an alien spaceship. Now, as of late January, anyone can access all of those observations, which contain phenomena astronomers don’t yet know about and that—hey, who knows—you could beat them to discovering.

Big surveys like this one, which watch swaths of sky agnostically rather than homing in on specific stuff, represent a big chunk of modern astronomy. They are an efficient, pseudo-egalitarian way to collect data, uncover the unexpected, and allow for discovery long after the lens cap closes. With better computing power, astronomers can see the universe not just as it was and is but also as it's changing, by comparing, say, how a given part of the sky looks on Tuesday to how it looks on Wednesday. Pan-STARRS's latest data dump, in particular, gives everyone access to the in-process cosmos, opening up the "time domain" to all earthlings with a good internet connection.

Pan-STARRS, like all projects, was once just an idea. It started around the turn of this century, when astronomers Nick Kaiser, John Tonry, and Gerry Luppino at Hawaii’s Institute for Astronomy suggested that relatively “modest” telescopes—hooked to huge cameras—were the best way to image large skyfields.

Today, that idea has morphed into Pan-STARRS, a many-pixeled instrument attached to a 1.8-meter telescope (big optical telescopes may measure around 10 meters). It takes multiple images of each part of the sky to show how it’s changing. Over the course of four years, Pan-STARRS imaged the heavens above 12 times, using five different filters. These pictures may show supernovae flaring up and dimming back down, active galaxies whose centers glare as their black holes digest material, and strange bursts from cataclysmic events. “When you visit the same piece of sky again and again, you can recognize, ‘Oh, this galaxy has a new star in it that was not there when we were there a year or three months ago,’” says Rick White, an astronomer at the Space Telescope Science Institute, which hosts Pan-STARRS’s archive. In this way, Pan-STARRS is a forerunner of the massive Large Synoptic Survey Telescope, or LSST, which will snap 800 panoramic images every evening with a 3.2-billion-pixel camera, capturing the whole sky twice a week.

Plus, by comparing bright dots that move between images, astronomers can uncover closer-by objects, like rocks whose path might sweep uncomfortably close to Earth.

That latter part is interesting not just to scientists but also to the military. “It’s considered a defense function to find asteroids that might cause us to go extinct,” White says. That's (at least part of) why the Air Force, which also operates a satellite-tracking system on Haleakala, pushed $60 million into Pan-STARRS’s development. NASA, the state of Hawaii, a consortium of scientists, and some private donations ponied up the rest.

But when the telescope first got to work, its operations hit some snags. Its initial images were about half as sharp as they should have been, because the system that adjusted the telescope’s mirror to make up for distortions wasn’t working right.

Also, the Air Force redacted parts of the sky. It used software called Magic to detect streaks of light that might be satellites (including the US government's own). Magic masked those streaks, essentially placing a dead-pixel black bar across that section of sky, to “prevent the determination of any orbital element of the artificial satellite before the images left the [Institute for Astronomy] servers,” according to a recent paper by the Pan-STARRS group. The article says the Air Force dropped the requirement in December 2011. The magic was gone, and the scientists reprocessed the original raw data, removing the black boxes.

The first tranche of data, from the world’s most substantial digital sky survey, came in December 2016. It was full of stars, galaxies, space rocks, and strangeness. The telescope and its associated scientists have already found an eponymous comet, crafted a 3D model of the Milky Way’s dust, unearthed way-old active galaxies, and spotted everyone’s favorite probably-not-an-alien-spaceship, ’Oumuamua.

The real deal, though, entered the world late last month, when astronomers publicly released and put online all the individual snapshots, including auto-generated catalogs of some 800 million objects. With that data set, astronomers and regular people everywhere (once they've read a fair number of help-me files) can check out a patch of sky and see how it evolved as time marched on. The curious can do more of the “time domain” science Pan-STARRS was made for: catching explosions, watching rocks, and squinting at unexplained bursts.

Pan-STARRS might never have gotten its observations online if NASA hadn't seen its own future in the observatory's massive data pileup. That 1.6-petabyte archive is now housed at the Space Telescope Science Institute in Maryland, in a repository called the Mikulski Archive for Space Telescopes. The institute is also the home of bytes from Hubble, Kepler, GALEX, and 15 other missions, mostly belonging to NASA. “At the beginning they didn’t have any commitment to release the data publicly,” White says. “It’s such a large quantity they didn’t think they could manage to do it.” The institute, though, welcomed this outsider data in part so it could learn how to deal with such huge quantities.

The hope is that Pan-STARRS’s freely available data will make a big contribution to astronomy. Just look at the discoveries people publish using Hubble data, White says. “The majority of papers being published are from archival data, by scientists that have no connection to the original observations,” he says. That, he believes, will hold true for Pan-STARRS too.

But surveys are beautiful not just because they can be shared online. They’re also A+ because their observations aren’t narrow. In much of astronomy, scientists look at specific objects in specific ways at specific times. Maybe they zoom in on the magnetic field of pulsar J1745–2900, or the hydrogen gas in the farthest reaches of the Milky Way’s Perseus arm, or that one alien spaceship rock. Those observations are perfect for that individual astronomer to learn about that field, arm, or ship—but they’re not as great for anything or anyone else. Surveys, on the other hand, serve everyone.

“The Sloan Digital Sky Survey set the standard for these huge survey projects,” says White. Sloan, which started operations in 2000, is on its fourth iteration, collecting light with telescopes at Apache Point Observatory in New Mexico and Las Campanas Observatory in Northern Chile. From the early universe to the modern state of the Milky Way’s union, Sloan data has painted a full-on portrait of the universe that, like those creepy Renaissance portraits, will stick around for years to come.

Over in a different part of New Mexico, on the high Plains of San Agustin, radio astronomers recently set the Very Large Array’s sights on a new survey. Having started in 2017, the Very Large Array Sky Survey is still at the beginning of its seven years of operation. But astronomers don't have to wait for it to finish its observations, as happened with the first Pan-STARRS survey. “Within several days of the data coming off the telescope, the images are available to everybody,” says Brian Kent, who since 2012 has worked on the software that processes the data. That's no small task: For every four hours of skywatching, the telescope spits out 300 gigabytes, which the software then has to make useful and usable. “You have to put the collective smarts of the astronomers into the software,” he says.

Kent is excited about the same kinds of time-domain discoveries as White is: about seeing the universe at work rather than as a set of static images. Including the chronological dimension is hot in astronomy right now, from these surveys to future instruments like the LSST and the massive Square Kilometre Array, a radio telescope that will spread across two continents.

By watching for quick changes in their observations, astronomers have sought and found comets, asteroids, supernovae, fast radio bursts, and gamma-ray bursts. As they keep capturing a cosmos that evolves, moves, and bursts forth—not one trapped forever in whatever pose they found it—who knows what else they'll unearth.

Kent, though, is also psyched about the idea of bringing the universe to more people, through the regular internet and more formal initiatives, such as one that has, among other projects, helped train students from the University of the West Indies and the University of the Virgin Islands to dig into the data.

“There’s tons of data to go around,” says White. “And there’s more data than one person can do anything with. It allows people who might not use the telescope facilities to be able to be brought into the observatory.”

No reservations required.
