The Recession’s Silver Lining - IEEE Spectrum



Countless research institutions contributed to the digital, wireless, and mobile technologies that underpin our modern world. But none contributed more than Bell Telephone Laboratories, which logged an astonishing share of the key advances of the 20th century, including the transistor, the cellphone, the digital signal processor, the laser, the Unix operating system, and motion-picture sound. We no longer have Bell Labs to fund research with long-term payback. That has prompted many to wonder: Who will pay for such research now, and where will it be done?

We say: Governments and corporations must share the burden, and they must do it in structured collaborations among universities, companies, and government agencies in which intellectual property is freely available to all participants.

We also say: the sooner we can get started, the better. The recession has left R&D spending in free fall. This year, the global semiconductor industry is expected to spend just US $200 billion on research—$50 billion less than in 2008. And times are really tough in the semiconductor-equipment industry, whose R&D operations will shrivel like a salted leech from $34 billion in 2007 down to a pitiful $10 billion in 2009.

In the United States, a few basic sciences are getting a reprieve, thanks to the federal stimulus package. Of the $787 billion designated, $10 billion went to the National Institutes of Health for life-sciences research. Meanwhile, federal funding for the physical sciences has continued its steady decline.

In the long run, however, even the life sciences are unlikely to benefit in any meaningful way from that load of cash. A one-time infusion helps, but it also creates a classic feast-or-famine problem: The money needs to be spent by September of next year. And because there’s no follow-up money to keep these programs going beyond that time, officials can’t start major long-term initiatives.

But the recession isn’t what’s causing this problem; it’s only revealing an intensifying trend in the semiconductor industry. Revolutionary innovation has been missing in action for about 40 years as the industry instead focused on incremental advances. The industry could get away with short-term research because those incremental advances got the companies where they needed to be, financially speaking.

Limiting that funding to incremental research is why there hasn’t been a “transistor moment” in 50 years. So, painful as it is, this economic gloom might actually turn out to be a good thing. It offers the industry, for the first time in decades, an opportunity to rethink its most basic strategies, down to the engine that keeps it all going—innovation.

Innovation has often been a catalyst for economic recovery. It happened in the 1930s, when DuPont invented one of the major materials of the 20th century: neoprene. Two years after its introduction, neoprene, a synthetic rubber, was in every car and plane built in the United States, and 50 years after that it was in knee braces and wet suits. And again in the 1980s, small steel mills like Geneva and Nucor rose from the ashes of Big Steel. Today many developed countries have stupendous R&D resources and infrastructures and are eager to use them to pursue very high potential payoffs, especially in semiconductors. So the basic factors are in place to use this recession to establish a new model of semiconductor R&D that could usher in the next generation of innovation.

But there’s a problem. The innovation strategies that semiconductor companies large and small have developed over the past half century are grounded in the business practices and research conventions of a bygone era. Unless their strategies evolve to meet these changes, many of those companies will die a slow and avoidable death.

There is a way out. It just doesn’t look anything like the old way out, and it will make some of those companies uncomfortable, at least initially. At the Semiconductor Industry Association (SIA), in San Jose, Calif., we have developed a new model for innovation. Our model is counterintuitive—it asks companies to share intellectual property and invest in research that also benefits competitors, something that’s anathema in today’s standard industry practices. But our approach has been successfully tested by corporations like IBM, Intel, Micron, and Xilinx, among many others. When companies have embraced it, they’ve seen encouraging results. For example, there has already been a significant breakthrough that can be largely attributed to this model: the graphene-based BiSFET logic device, which would operate at a fraction of the power of today’s typical devices. The concept is being developed by researchers at the University of Texas at Austin, and if it works as well as the simulations imply, it could change the world.

The BiSFET, we hope, is only the beginning. But it could be the end, if we can’t convince semiconductor executives of the value of our model. (And not everyone is on board with the idea of a “shared innovation environment.”) To better understand the barriers, we interviewed top management at some leading semiconductor companies and universities. Our subjects represented a broad cross-section of the industry, some of which use our model, and some of which do not. These institutions represent a perfect microcosm of the stumbling blocks—and the rewards for letting go of the stifling old ways and making the leap.

Big changes are afoot. The semiconductor industry has been the greatest single source of industrial innovation in recent history. But many of the advances have been incremental, such as the shift to high-k dielectric materials and the move from aluminum to copper for on-chip interconnects. As the old saw has it, after you’ve gone from the buggy to the car, building a better buggy whip won’t do you any good. In electronics, building a better triode won’t help. What the industry needs now is more like the shift from vacuum tubes to semiconductors. That’s because two trends are driving the semiconductor industry to a momentous inflection point.

First, the customer is changing. Several hundred million individual consumers, many of them in the developing world, have joined the global economy in just the past few years. Individuals have replaced companies and governments as the dominant buyers of cellphones, laptops, digital cameras, and other high-tech goodies. In fact, large corporate IT departments are no longer the world’s primary technology consumers [see time line, “Vectors of Change”]. These hundreds of millions of customers are atomized into many fragments, they don’t have monolithic tastes, and most important, they’re much more cost-conscious than big companies are.

Second, the metal-oxide-semiconductor transistor—the basic building block of the entire edifice of modern semiconductors—is approaching fundamental physical limits. The “next big thing” won’t be a linear progression of faster and faster computing and communication, laid out in road maps of the sort we’ve issued over the past 20 years. In fact, we don’t know what this next big thing is going to look like, because it could come from anywhere.

Engineering will never return to its isolation in a bubble of mechanics and computers. And therein lies the rub: Today the field has become fantastically inter- and multidisciplinary. So engineers and their companies need to be fluent in a growing panoply of languages: neuroscience, biology, geophysics, and more. The reason is that the next great innovation might come from a neuroscientist whose circuits can mimic the functions of a synapse or from a geologist whose algorithms model the flow of magma inside volcanoes.

However, no one company can be expected to keep tabs on every significant development in academic science and technology. Indeed, conventional semiconductor R&D and strategic marketing departments are often mired in short-term firefights, product deadlines, and meeting the next quarter’s financial goals.

We think the SIA model can do for research in the 21st century what the Bell Labs model did in the 20th. With our model, the industry can draw on a kind of nationwide “neural network” of academic research. To understand what we’re proposing, you need a quick lesson in university-industry interaction as it has existed for the past five decades.

In the existing system, a company consults individual professors with specific research questions, or it invests in local colleges mainly to burnish its image. In the first model, a company hires a star professor or researcher as a consultant and might also fund one or two graduate students for a small, proprietary project. The typical scale of this engagement is $50 000 to $100 000. In the second model, a company invests similar or potentially larger amounts to build goodwill in the community and to supplement its local talent base for recruitment.

These partnerships have yielded incremental advances. But to get to the next big paradigm, we need to innovate the way we innovate. To do that, we have developed our research model, exemplified here by the Nanoelectronics Research Initiative (NRI), one of 11 national centers we have set up to solve the technology showstoppers waiting to meet us in the future.

Here’s how they work. The research takes place not inside one particular company but across multiple universities and various disciplines, all tied together with a common goal. Each center is funded at several million dollars a year, with about 50 universities, 250 faculty, and 450 graduate and postdoctoral students. Companies “buy in” to the research conducted there and then share early results. All the interdisciplinary research centers operate with a nonexclusive intellectual property (IP) model. What that means is that all sponsoring companies have the right to use the IP without paying any royalties, but the university owns it. More on that later.

For NRI specifically, the funding comes from five companies, two U.S. federal agencies, and four state governments. Together, these organizations have invested a total of $20 million per year for the past four years. The NRI focuses on radically new semiconductor logic devices, ones not based on metal-oxide-semiconductor field-effect transistors, or MOSFETs, as virtually all modern chips are. In particular, NRI-hosted research has already produced the new device mentioned earlier: the BiSFET, or bilayer pseudospin field-effect transistor (not to be confused with the bistable field-effect transistor, or BISFET).

Some background: One of the most urgent needs in technology today is for ultralow-power devices. Vacuum tubes could never have been used to build a personal computer. A cellphone or MP3 player built with the bipolar junction or n-type MOS semiconductor technology that was common 30 years ago would suck up so much power that it could never run on batteries. All digital information processing is based on variations in electronic charge (for instance, in the capacitor of a dynamic RAM cell), which correspond to a 1 or 0 state. Manipulating charge requires power, which generates heat. Just as the previous technology transitions—from vacuum tubes to solid-state devices to integrated circuit chips—were driven by power consumption, so too will the next transition be.

The BiSFET, described by Sanjay Banerjee and Leonard Franklin Register and their colleagues at UT Austin, is in the earliest research phase but offers tremendous potential. The BiSFET could substitute for the MOSFET in logic and memory applications. Like a MOSFET, it can switch and it can amplify. Where the BiSFET stands alone, however, is in its phenomenal power parsimony: It needs only one-hundredth to one-thousandth the power of a standard MOSFET, mainly because it would switch at much lower voltages.

BiSFETs will not be drop-in replacements for MOSFETs, but in principle, BiSFET-based circuits could replace CMOS circuits in any application. Behind the BiSFET is a theoretical concept that’s not new in physics, but it had been completely beyond the ken of the semiconductor industry. Unlike the silicon channel in a MOSFET, the BiSFET channel is based on graphene, an exotic material consisting of single atomic sheets of the element carbon. Think of these layers as unrolled carbon nanotubes. Also, unlike a CMOS field-effect transistor, which has three terminals—source, drain, and gate—the BiSFET has four terminals: source, drain, and a top and bottom gate, which sandwich two electrically coupled layers of graphene between them. Though the two gates function as one, they must be biased differently to create electrons in one graphene layer and positively charged holes in the other. Interactions between these electrons and holes lead to what’s known as an electron-hole condensate, an esoteric state of matter in quantum physics, in which the particles tend to lose their individuality and display collective behavior. The basic idea has been around for decades, but it was long thought that, given the strange physics involved, these condensates could be realized only in exotic materials and at cryogenic temperatures.

The proposed graphene devices require just 25 millivolts, a scant one-fortieth of the operating voltage of today’s “low-power” devices. This device could operate at room temperature and require a thousandth of the power of current devices. The BiSFET is as yet only a concept based on novel predicted physics in a novel material system. We still need experimental verification of the underlying phenomena on which the device is based.
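To see why operating voltage matters so much, here is a back-of-the-envelope illustration—not taken from the UT Austin work—that assumes the dynamic switching energy of a charge-based logic gate scales roughly as it does in CMOS, with capacitance held constant:

$$E_{\text{switch}} \approx \tfrac{1}{2} C V^2 \quad\Longrightarrow\quad \frac{E_{\text{BiSFET}}}{E_{\text{CMOS}}} \approx \left(\frac{25\ \text{mV}}{1\ \text{V}}\right)^2 \approx \frac{1}{1600}$$

In practice, capacitance differences, leakage, and circuit overhead would pull the savings back toward the hundred-to-thousandfold range cited above, but the quadratic dependence of switching energy on voltage is why such low-voltage devices are so attractive.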

The bottom line is that behind this breakthrough were NRI-assembled teams that included physicists, materials scientists, and electrical engineers specializing in device design. The successful application of graphene’s alchemical properties to semiconductor physics could have happened only within the interdisciplinary research architecture we have created.

In research, as in life, there’s no such thing as one size fits all. When we queried tech companies about our model, we found that few of them would be willing to adopt it. Impediments are often relics of the mind-set created in the last century.

Two main arguments came up again and again: Technology managers said they did not want to share intellectual property or research with competitors, and they did not want to spend money on what they could learn by attending conferences. A more fundamental issue was that many companies, particularly ones forced into short-term strategies, do not consider university research an important part of their business strategy.

By definition, the research performed in a collaborative university environment is shared by many players, including competitors—and potential future competitors. One perceived nightmare scenario, for a corporation, is that of a university professor or student forming her own company to exploit the tech breakthrough. Why should a company invest in research that also benefits its rivals? In terms of time and money, IP is the proverbial “giant sucking sound.” Of course IP is critical, but what’s often misunderstood is that its value depends entirely on the maturity of the technology. Guarding product IP like Cerberus at the gates of hell is not necessarily a wise strategy, especially for early-stage research, which occurs years before an innovation can be brought to market.

The problem here is that semiconductor companies are behaving as if they were pharmaceutical companies. With pharmaceutical discoveries, the early-stage IP is the most important; it would be unthinkable to share the development costs of a Prozac or a Celexa with a competitor. But in the semiconductor industry, no early-stage IP is ever “ready to wear.” There’s lots of cutting, fitting, altering, refitting, and realtering before it’s ready for the runway. Xilinx chief technology officer Ivo Bolsens put it very well when he told us, “There are a hundred decisions and innovations that I will need to add before I can take an excellent academic idea and make it into a product.”

Consider carbon nanotubes. These basic building blocks can be used in many different ways to develop countless different technologies and products. Patenting something so basic would be akin to patenting a brick. Builders can use the same brick to make castles and cottages. The outcomes are vastly different and do not depend in any way on whether that builder has the patent on the brick. And in that sense, the BiSFET device is a stellar example of the kind of early IP that companies are so unwilling to share. No one has even created the device yet—it’s certainly not ready for commercialization. Like the brick, it could lead to a hundred different architectures. And we hope it will.

The other belief, that companies can gain access to early-stage R&D results at conferences, is even easier to dispel. What companies don’t understand is that by the time their researchers hear about a result at a conference, it’s already too little, too late, and too limited. Too little because you see only the tip of the iceberg in the final results; too late because by the time it’s in a paper the research has already been picked over for two years; and too limited—this is the most important point—because you see only the path that led to the positive outcome. A conference talk is the condensed summary, boiled down to 20 PowerPoint slides and 20 minutes; you miss all the paths that were tried and failed. Being engaged with the full research, by contrast, is worth the price of admission on its own, because knowing which dead ends to avoid could save a company millions. Those kinds of negative results never get published at conferences.

Any company would be thrilled to achieve a 10 percent reduction in power between product generations. That number is typical of what evolutionary advances can accomplish at their best. Our national centers, by contrast, have enabled revolutionary and discontinuous advances in the last four to five years that haven’t been seen for the last four or five decades.

With devices that perform far better than today’s devices and yet consume a thousandth of the power, we could drastically reduce the consumption of the power-hungry server farms that run today’s critical Internet applications but consume enough power for a small city. We could realize “green” residential and transportation systems, a huge opportunity—or perhaps even a necessity, given that the world in 2050 may need 28 terawatts of power, compared with the roughly 15 TW we use today. We might enable a new generation of personal electronics that turn our beloved iPhones into dinosaurs. We might build implantable medical devices that never need external charging, which means they wouldn’t require invasive surgery just to change the battery. The breakthrough research in the centers may even enable radical concepts like “energy scavenging,” in which a chip survives entirely on power it draws from its surroundings—that is, from the movements of the person wearing the device.

But none of this will be possible until companies let go of their outdated notions and downright misconceptions.

The challenge today is in finding sources of disruptive scientific innovation. At Bell Labs and the Xerox Palo Alto Research Center, the seeds were planted for today’s technology revolution. No one has the resources to replicate those institutions today, but we believe we can build an alternative model of innovation, updated for the 21st century. It may very well be the key to an epochal change.

Pushkar Apte is vice president of technology programs and George Scalise is president of the Semiconductor Industry Association. They describe the effort to push semiconductor R&D past the end of Moore’s Law in “The Recession’s Silver Lining.”


Racial bias led to faulty product design, which in turn led to the device’s inability to work properly with melanin-rich skin

Rebecca Sohn is a freelance science journalist. Her work has appeared in Live Science, Slate, and Popular Science, among others. She has been an intern at STAT and at CalMatters, as well as a science fellow at Mashable.

If someone is seeking medical care, the color of their skin shouldn’t matter. But according to new research, pulse oximeters’ performance and accuracy apparently hinge on it. In other words, inaccurate blood oxygen measurements made by pulse oximeters have had clear consequences for people of color during the COVID-19 pandemic.

“That device ended up being essentially a gatekeeper for how we treat a lot of these patients,” said Dr. Tianshi David Wu, an assistant professor of medicine at Baylor College of Medicine and one of the authors of the study.

For decades, scientists have found that pulse oximeters, devices that estimate blood oxygen saturation, can be affected by a person’s skin color. In 2021, the FDA issued a warning about this limitation of pulse oximeters, and the agency says it plans to hold a meeting on the devices later this year. Low oxygen saturation, called hypoxemia, is a common symptom of COVID-19, and low blood oxygen levels qualify patients to receive certain medications. In the first study to examine this issue among COVID-19 patients, published in JAMA Internal Medicine in May, researchers found that the inaccurate measurements resulted in a “systemic failure,” delaying care for many Black and Hispanic patients and, in some cases, preventing them from receiving proper medications. The study adds to a growing sense of urgency around an issue first raised decades ago.

“We found that in Black and Hispanic patients, there was a significant delay in identifying severe COVID compared to white patients.” —Ashraf Fawzy, Johns Hopkins University

Pulse oximeters work by passing light through part of the body, usually a finger. These devices infer a patient’s blood-oxygen saturation—the percentage of hemoglobin carrying oxygen—from the absorption of that light by hemoglobin, the pigment in blood that carries oxygen. In theory, pulse oximeters shouldn’t be affected by anything other than the levels of oxygen in the blood. But research has shown otherwise.
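To make the principle concrete, here is a minimal Python sketch of the classic “ratio of ratios” calculation that many pulse oximeters use in some form. The linear calibration constants below are illustrative placeholders, not values from any actual device or from the study; real oximeters rely on empirically fitted calibration curves, which is exactly where skin-pigment effects can creep in.

```python
def estimate_spo2(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate blood-oxygen saturation (SpO2, %) from the pulsatile (AC)
    and steady (DC) light absorption at red and infrared wavelengths.

    A sketch of the textbook "ratio of ratios" method; the constants
    (110, 25) are illustrative placeholders, not a real calibration.
    """
    # Normalize each wavelength's pulsatile signal by its baseline,
    # then compare red to infrared.
    r = (red_ac / red_dc) / (ir_ac / ir_dc)

    # Crude linear approximation to an empirical calibration curve.
    spo2 = 110.0 - 25.0 * r

    # Clamp to a physically meaningful range.
    return max(0.0, min(100.0, spo2))


# Example: a ratio of about 0.5 maps to roughly 97-98 percent saturation
# under this illustrative calibration.
print(estimate_spo2(red_ac=0.02, red_dc=1.0, ir_ac=0.04, ir_dc=1.0))
```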

“If you have melanin, which is the pigment that's responsible for skin color… that could potentially affect the transmittance of the light going through the skin,” said Govind Rao, a professor of engineering and director of the Center for Advanced Sensor Technology at the University of Maryland, Baltimore County, who was not involved in the study.

To examine how patients with COVID-19 were impacted by this flaw in pulse oximeters, researchers used data from over 7,000 COVID-19 patients in the Johns Hopkins hospital system, which includes five hospitals, between March 2020 and November 2021. In the first part of the study, researchers compared blood oxygen saturation measures for the 1,216 patients who had measurements taken using both a pulse oximeter and arterial blood gas analysis, which determines the same measure using a direct analysis of blood. The researchers found that the pulse oximeter overestimated blood oxygen saturation by an average of 1.7 percent for Asian patients, 1.2 percent for Black patients, and 1.1 percent for Hispanic patients.
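For illustration only—this is not the study’s actual statistical methodology, and the numbers are made up—the paired comparison amounts to computing, for each group, the average gap between the pulse-oximeter reading and the arterial-blood-gas reading taken at the same time:

```python
import pandas as pd

# Hypothetical paired readings: each row has both a pulse-oximeter value
# and an arterial-blood-gas (ABG) value. Values are invented for illustration.
paired = pd.DataFrame({
    "race":     ["Asian", "Black", "Hispanic", "White", "Black", "White"],
    "pulse_ox": [96.0, 95.0, 94.0, 93.0, 97.0, 95.0],
    "abg_sao2": [94.2, 93.9, 92.8, 92.9, 95.7, 94.8],
})

# A positive bias means the pulse oximeter overestimates true saturation.
paired["bias"] = paired["pulse_ox"] - paired["abg_sao2"]

# Average overestimation by self-identified race, analogous in spirit to the
# 1.7 / 1.2 / 1.1 percentage-point figures reported above.
print(paired.groupby("race")["bias"].mean().round(2))
```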

Then, the researchers used these results to create a statistical model to estimate what the arterial blood gas measurements would be for patients with only pulse oximeter measurements. Because arterial blood gas requires a needle to be inserted into an artery to collect the blood, most patients only have a pulse oximeter measurement.

To qualify for COVID-19 treatment with remdesivir, an antiviral drug, and dexamethasone, a steroid, patients had to have a blood oxygen saturation of 94 percent or less. Based on the researchers’ model, nearly 30 percent of the 6,673 patients for whom there was enough information to predict arterial blood gas measurements met this cutoff. Many of these patients, most of whom were Black or Hispanic, had their treatment delayed by between five and seven hours, with Black patients delayed on average one hour more than white patients.

“We found that in Black and Hispanic patients, there was a significant delay in identifying severe COVID compared to white patients,” said Ashraf Fawzy, assistant professor of medicine at Johns Hopkins University and lead author of the study.

There were 451 patients who never qualified for treatment but whom the researchers predicted likely should have; 55 percent of them were Black and 27 percent were Hispanic.

The study “shows how urgent it is to move away from pulse [oximeters],” said Rao, and to find alternative ways of measuring blood oxygen saturation.

Studies finding that skin color can affect pulse oximeters go back as far as the 1980s. Despite knowledge of the issue, there are few ways of addressing it. Wu says increasing awareness helps, and that it may also be helpful to do more arterial blood gas analyses.

A long-term solution will require changing the technology, either by using a different method entirely or having devices that can better adjust results to account for differences in skin color. One technological alternative is having devices that measure oxygen diffusing across the skin, called transdermal measurement, which Rao’s lab is working on developing.

The researchers said one limitation of their study involved the way patients’ race was self-identified—meaning a wide range of skin pigmentation could be represented in each of the sample groups, depending on how each patient self-identified. The researchers also did not measure how delaying or denying treatment affected the patients clinically—for instance, how likely they were to die, how sick they were, or how long they were sick. They are currently working on a study examining these additional questions.

Although the problem of racial bias in pulse oximeters has no immediate solution, the researchers said, they are confident the primary hurdle is not technological.

“We do believe that technology exists to fix this problem, and that would ultimately be the most equitable solution for everybody,” said Wu.

Made in bulk for the first time, this new carbon allotrope is the semiconductor graphene isn't

Prachi Patel is a freelance journalist based in Pittsburgh. She writes about energy, biotechnology, materials science, nanotechnology, and computing.

Researchers have found a way to make graphyne, a long-theorized carbon material, in bulk quantities. Like its cousin graphene, graphyne is a single layer of carbon atoms, but one arranged differently.

Since graphene’s discovery 18 years ago—leading to a Nobel Prize in Physics in 2010—the versatile material has been investigated for hundreds of applications. These include strong composite materials, high-capacity battery electrodes, transparent conductive coatings for displays and solar cells, supersmall and ultrafast transistors, and printable electronics.

While graphene is finding its way into sports equipment and car tires for its mechanical strength, its highly touted electronic applications have been slower to materialize. One reason is that bulk graphene is not a semiconductor. To make it semiconductive, which is crucial for transistors, it must be produced in the form of nanoribbons with just the right dimensions.
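As a rough rule of thumb from the graphene-nanoribbon literature (not a claim made in this article), the bandgap opened by confining electrons in a ribbon scales inversely with the ribbon’s width:

$$E_g \propto \frac{1}{w}$$

where $w$ is the ribbon width and the constant of proportionality depends on the ribbon’s edge structure and is determined empirically. Only very narrow ribbons with well-controlled edges open a gap large enough to be useful for transistors—one reason this route to graphene electronics has been slow to pay off.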

There’s another atom-thick form of carbon related to graphene, one that scientists first predicted back in 1987 and that is a semiconductor without needing to be cut into particular shapes and sizes. But this material, graphyne, has proven nearly impossible to make in more than microscopic quantities.

Now, researchers at the University of Colorado in Boulder have reported a method to produce graphyne in bulk. “By using our method we can make bulk powder samples,” says Wei Zhang, a professor of chemistry at University of Colorado Boulder. “We find multilayer sheets of graphyne made of 20 to 30 layers. We are pretty confident we can use different exfoliation methods to gather a few layers or even a single layer.”

Graphite, diamond, fullerenes, and graphene are all carbon allotropes, and their diverse properties arise from the combination and arrangement of the different types of bonds between their carbon atoms. So while the 3D cubic lattice of carbon atoms in diamond makes it exceptionally hard, graphene’s single layer of carbon atoms in a hexagonal lattice makes it extremely conductive.

Graphyne is similar to graphene in that it’s an atom-thick sheet of carbon atoms. But instead of a hexagonal lattice, it can take on different structures of spaced-apart rings connected via triple bonds between carbon atoms.

The material’s unique conducting, semiconducting, and optical properties could make it even more exciting for electronic applications than graphene. Graphyne’s intrinsic electron mobility could, in theory, be 50 percent higher than graphene’s. In some graphynes, electrons can be conducted in only one direction. And the material has other exciting properties, such as its ion mobility, which is important for battery electrodes.

Zhang, Yingjie Zhao of Qingdao University of Science and Technology, in China, and their colleagues made graphyne using a method called alkyne metathesis. This is a catalyst-triggered organic reaction in which chemical bonds between carbon atoms in hydrocarbon molecules can crack open and reform to reach a more stable structure.

The process is complicated and slow. But it produces enough graphyne for scientists to be able to study the material’s properties in depth and evaluate its uses for potential applications. “It will take at least a couple years to have some fundamental understanding of the material,” says Zhang. “Then it will be in good shape for people to take it to a higher level, which is targeting specific semiconducting or battery applications.”

He and his colleagues plan to investigate ways to produce the material in much larger quantities. Being able to use solution-based chemical reactions would be critical for making graphyne at industrially relevant scales, he says.

It’s just the beginning for graphyne though, and for now, just being able to make this long-hypothesized material in sufficient quantities is an exciting first step. “Fullerenes were discovered in the 1980s, then nanotubes in the early '90s, then graphene in 2004,” Zhang says. “From discovery of a new carbon allotrope to its intensive study to first application, the timeline is becoming shorter. I’m already receiving calls from venture capitalists around the world. But I tell them it’s a little bit early.”
