| When | Why |
| --- | --- |
| Apr-04-14 | Out of Foster Care, Into College |
We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York. As is typical of commercial flights today, the pilots didn’t have all that much to do during the hour-long trip. The captain, Marvin Renslow, manned the controls briefly during takeoff, guiding the Bombardier Q400 turboprop into the air, then switched on the autopilot and let the software do the flying. He and his co-pilot, Rebecca Shaw, chatted—about their families, their careers, the personalities of air-traffic controllers—as the plane cruised uneventfully along its northwesterly route at 16,000 feet. The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity. Rather than preventing a stall, Renslow’s action caused one. The plane spun out of control, then plummeted. “We’re down,” the captain said, just before the Q400 slammed into a house in a Buffalo suburb.
The crash, which killed all 49 people on board as well as one person on the ground, should never have happened. A National Transportation Safety Board investigation concluded that the cause of the accident was pilot error. The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.” An executive from the company that operated the flight, the regional carrier Colgan Air, admitted that the pilots seemed to lack “situational awareness” as the emergency unfolded.
The Buffalo crash was not an isolated incident. An eerily similar disaster, with far more casualties, occurred a few months later. On the night of May 31, an Air France Airbus A330 took off from Rio de Janeiro, bound for Paris. The wide-body jet ran into a storm over the Atlantic about three hours after takeoff. Its air-speed sensors, coated with ice, began giving faulty readings, causing the autopilot to disengage. Bewildered, the pilot flying the plane, Pierre-Cédric Bonin, yanked back on the stick. The plane rose and a stall warning sounded, but he continued to pull back heedlessly. As the plane climbed sharply, it lost velocity. The airspeed sensors began working again, providing the crew with accurate numbers. Yet Bonin continued to slow the plane. The jet stalled and began to fall. If he had simply let go of the control, the A330 would likely have righted itself. But he didn’t. The plane dropped 35,000 feet in three minutes before hitting the ocean. All 228 passengers and crew members died.
The first automatic pilot, dubbed a “metal airman” in a 1930 Popular Science article, consisted of two gyroscopes, one mounted horizontally, the other vertically, that were connected to a plane’s controls and powered by a wind-driven generator behind the propeller. The horizontal gyroscope kept the wings level, while the vertical one did the steering. Modern autopilot systems bear little resemblance to that rudimentary device. Controlled by onboard computers running immensely complex software, they gather information from electronic sensors and continuously adjust a plane’s attitude, speed, and bearings. Pilots today work inside what they call “glass cockpits.” The old analog dials and gauges are mostly gone. They’ve been replaced by banks of digital displays. Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes. Rory Kay, a veteran United captain who has served as the top safety official of the Air Line Pilots Association, put the problem bluntly in a 2011 interview with the Associated Press: “We’re forgetting how to fly.” The Federal Aviation Administration has become so concerned that in January it issued a “safety alert” to airlines, urging them to get their pilots to do more manual flying. An overreliance on automation, the agency warned, could put planes and passengers at risk.
The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.
Doctors use computers to make diagnoses and to perform surgery. Wall Street bankers use them to assemble and trade financial instruments. Architects use them to design buildings. Attorneys use them in document discovery. And it’s not only professional work that’s being computerized. Thanks to smartphones and other small, affordable computers, we depend on software to carry out many of our everyday routines. We launch apps to aid us in shopping, cooking, socializing, even raising our kids. We follow turn-by-turn GPS instructions. We seek advice from recommendation engines on what to watch, read, and listen to. We call on Google, or Siri, to answer our questions and solve our problems. More and more, at work and at leisure, we’re living our lives inside glass cockpits.
A hundred years ago, the British mathematician and philosopher Alfred North Whitehead wrote, “Civilization advances by extending the number of important operations which we can perform without thinking about them.” It’s hard to imagine a more confident expression of faith in automation. Implicit in Whitehead’s words is a belief in a hierarchy of human activities: Every time we off-load a job to a tool or a machine, we free ourselves to climb to a higher pursuit, one requiring greater dexterity, deeper intelligence, or a broader perspective. We may lose something with each upward step, but what we gain is, in the long run, far greater.
History provides plenty of evidence to support Whitehead. We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead. But Whitehead’s observation should not be mistaken for a universal truth. He was writing when automation tended to be limited to distinct, well-defined, and repetitive tasks—weaving fabric with a steam loom, adding numbers with a mechanical calculator. Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans. That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus. We trade subtle, specialized talents for more routine, less distinctive ones.
Most of us want to believe that automation frees us to spend our time on higher pursuits but doesn’t otherwise alter the way we behave or think. That view is a fallacy—an expression of what scholars of automation call the “substitution myth.” A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part. As Parasuraman and a colleague explained in a 2010 journal article, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation.”
Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.
Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually, the highlights aid in the discovery of disease. But they can also have the opposite effect. Biased by the software’s suggestions, radiologists may give cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor. Most of us have experienced complacency when at a computer. In using e-mail or word-processing software, we become less proficient proofreaders when we know that a spell-checker is at work.
The way computers can weaken awareness and attentiveness points to a deeper problem. Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise. Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them. The effect, it has since become clear, influences learning in many different circumstances. When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity. It assembles a rich store of information and organizes that knowledge in a way that allows you to tap into it instantaneously. Whether it’s Serena Williams on a tennis court or Magnus Carlsen at a chessboard, an expert can spot patterns, evaluate signals, and react to changing circumstances with speed and precision that can seem uncanny. What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
In 2005, Christof van Nimwegen, a cognitive psychologist in the Netherlands, began an investigation into software’s effects on the development of know-how. He recruited two sets of people to play a computer game based on a classic logic puzzle called Missionaries and Cannibals. To complete the puzzle, a player has to transport five missionaries and five cannibals (or, in van Nimwegen’s version, five yellow balls and five blue ones) across a river, using a boat that can accommodate no more than three passengers at a time. The tricky part is that cannibals must never outnumber missionaries, either in the boat or on the riverbanks. One of van Nimwegen’s groups worked on the puzzle using software that provided step-by-step guidance, highlighting which moves were permissible and which weren’t. The other group used a rudimentary program that offered no assistance.
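The puzzle van Nimwegen chose is itself a compact search problem, which is part of why it makes a clean laboratory task. As an illustration of the rules described above (this is a sketch of my own, a breadth-first solver, not the software used in the study; the function names are invented), the constraints can be encoded in a few dozen lines:

```python
from collections import deque

def safe(m, c):
    # A group is safe if it contains no missionaries,
    # or missionaries at least match cannibals.
    return m == 0 or m >= c

def solve(total=5, capacity=3):
    # State: (missionaries on start bank, cannibals on start bank, boat side).
    # side 0 = start bank, side 1 = far bank. Goal: everyone across.
    start, goal = (total, total, 0), (0, 0, 1)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            # Walk the parent links back to reconstruct the crossing sequence.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, side = state
        # Try every boat load of dm missionaries and dc cannibals.
        for dm in range(capacity + 1):
            for dc in range(capacity + 1 - dm):
                if dm + dc == 0 or not safe(dm, dc):
                    continue  # boat must carry someone and itself be safe
                nm = m - dm if side == 0 else m + dm
                nc = c - dc if side == 0 else c + dc
                if not (0 <= nm <= total and 0 <= nc <= total):
                    continue
                # Both banks must remain safe after the crossing.
                if not (safe(nm, nc) and safe(total - nm, total - nc)):
                    continue
                nxt = (nm, nc, 1 - side)
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
    return None  # no solution for these parameters

path = solve()
print(len(path) - 1, "crossings")  # BFS yields a minimum-length solution
```

The contrast in the experiment maps onto this code directly: the "helpful" software effectively showed players which `(dm, dc)` loads passed the safety checks at each step, while the rudimentary version left them to run those checks in their heads.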
As you might expect, the people using the helpful software made quicker progress at the outset. They could simply follow the prompts rather than having to pause before each move to remember the rules and figure out how they applied to the new situation. But as the test proceeded, those using the rudimentary software gained the upper hand. They developed a clearer conceptual understanding of the task, plotted better strategies, and made fewer mistakes. Eight months later, van Nimwegen had the same people work through the puzzle again. Those who had earlier used the rudimentary software finished the game almost twice as quickly as their counterparts. Enjoying the benefits of the generation effect, they displayed better “imprinting of knowledge.”
What van Nimwegen observed in his laboratory—that when we automate an activity, we hamper our ability to translate information into knowledge—is also being documented in the real world. In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so. One recent study, conducted by Australian researchers, examined the effects of systems used by three international accounting firms. Two of the firms employed highly advanced software that, based on an accountant’s answers to basic questions about a client, recommended a set of relevant business risks to be included in the client’s audit file. The third firm used simpler software that required an accountant to assess a list of possible risks and manually select the pertinent ones. The researchers gave accountants from each firm a test measuring their expertise. Those from the firm with the less helpful software displayed a significantly stronger understanding of different forms of risk than did those from the other two firms.
What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages. Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed to the example of driving a car, which requires not only the instantaneous interpretation of a welter of visual signals but also the ability to adapt seamlessly to unanticipated situations. “Executing a left turn across oncoming traffic,” two prominent economists wrote in 2004, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” Just six years later, in October 2010, Google announced that it had built a fleet of seven “self-driving cars,” which had already logged more than 140,000 miles on roads in California and Nevada.
Driverless cars provide a preview of how robots will be able to navigate and perform work in the physical world, taking over activities requiring environmental awareness, coordinated motion, and fluid decision making. Equally rapid progress is being made in automating cerebral tasks. Just a few years ago, the idea of a computer competing on a game show like Jeopardy would have seemed laughable, but in a celebrated match in 2011, the IBM supercomputer Watson trounced Jeopardy’s all-time champion, Ken Jennings. Watson doesn’t think the way people think; it has no understanding of what it’s doing or saying. Its advantage lies in the extraordinary speed of modern computer processors.
In Race Against the Machine, a 2011 e-book on the economic implications of computerization, the MIT researchers Erik Brynjolfsson and Andrew McAfee argue that Google’s driverless car and IBM’s Watson are examples of a new wave of automation that, drawing on the “exponential growth” in computer power, will change the nature of work in virtually every job and profession. Today, they write, “computers improve so quickly that their capabilities pass from the realm of science fiction into the everyday world not over the course of a human lifetime, or even within the span of a professional’s career, but instead in just a few years.”
Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation? The technology theorist Kevin Kelly, commenting on the link between automation and pilot error, argued that the obvious solution is to develop an entirely autonomous autopilot: “Human pilots should not be flying planes in the long run.” The Silicon Valley venture capitalist Vinod Khosla recently suggested that health care will be much improved when medical software—which he has dubbed “Doctor Algorithm”—evolves from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. The cure for imperfect automation is total automation.
That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect. All of the parts may work flawlessly, but a small error in system design can still cause a major accident. And even if a perfect system could be designed, it would still have to operate in an imperfect world.
In a classic 1983 article in the journal Automatica, Lisanne Bainbridge, an engineering psychologist at University College London, described a conundrum of computer automation. Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible. People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at. Research on vigilance, dating back to studies of radar operators during World War II, shows that people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.” And because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching. The lack of awareness and the degradation of know-how raise the odds that when something goes wrong, the operator will react ineptly. The assumption that the human will be the weakest link in the system becomes self-fulfilling.
Psychologists have discovered some simple ways to temper automation’s ill effects. You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning. You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing. Giving people more to do helps sustain the generation effect. You can incorporate educational routines into software, requiring users to repeat difficult manual and mental tasks that encourage memory formation and skill building.
Some software writers take such suggestions to heart. In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter. The average temperature hovers at about 20 degrees below zero, thick sheets of sea ice cover the surrounding waters, and the sun is rarely seen. Despite the brutal conditions, Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
Inuit culture is changing now. The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why. The ease and convenience of automated navigation make the traditional Inuit techniques seem archaic and cumbersome.
But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails. The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid. The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying Inuit hunters for more than 15 years. He notes that while satellite navigation offers practical advantages, its adoption has already brought a deterioration in way-finding abilities and, more generally, a weakened feel for the land. An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it. A unique talent that has distinguished a people for centuries may evaporate in a generation.
Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.
This article available online at:
http://www.theatlantic.com/magazine/archive/2013/11/the-great-forgetting/309516/
Title: Out of Foster Care, Into College
By definition, foster children have been delinquent, abandoned, neglected, physically, sexually and/or emotionally abused, and that does not take into account nonstatutory abuses like heartache. About two-thirds never go to college and very few graduate, so it’s a safe bet that those who do have an uncommon resilience.
In a society where many young men and women live with their parents well into their 20s, foster children learn quickly that they are their own responsibility. To find someplace to live in 10th grade, Kaleef Starks, now an A student at the University of California, Los Angeles, but back then (to use his words) a gay, effeminate, abused teenager, went to the local library, logged onto a computer and Googled “homeless shelters for youth.”
His closest friend at U.C.L.A., Bianca Boccara, had parents who made her go panhandling with them because they knew passers-by would be more likely to donate if they saw a young child.
By the time he was 18, Manny Roque, now a student at Los Angeles City College, had lived in seven foster homes and attended five high schools. He was raised by a mother who was a crack addict and prostitute, growing up in such chaos that he did not go to school until sixth grade and only then did he understand how abnormal it was for a boy his age not to be able to name the letters of the alphabet.
One of his classmates, Shamir Moorer, “born with crack inside me,” estimates she’s lived in “15 or 17” foster and group homes. She can’t say for sure because she can’t remember the earliest ones, having been placed in care as a baby.
•
In a 2010 study by researchers at the University of Chicago, only 6 percent of former foster youths had earned a two- or four-year degree by age 24. Those not in college may be in jail; 34 percent who had left foster care at age 17 or 18 reported being arrested by age 19.
Most of the research is bleak — but not all. It appears that extra support can make a difference. The Chicago study tracked the lives of about 700 foster children in Illinois, Iowa and Wisconsin. Those in Illinois who were still getting foster care services at age 19 were less likely to have been arrested (22 percent versus 34 percent) than those in the other two states who were on their own. The same was true for education. The foster children from Illinois, which has long allowed young people to remain in care until their 21st birthday, were more likely to have completed at least one year of college than their counterparts from Iowa or Wisconsin, where the age of emancipation at the time was 18.
Which is why a growing number of colleges — from those that are selective, like U.C.L.A., to those that are not, like Los Angeles City College — have created extensive support programs aimed at current and former foster young people. At U.C.L.A., this includes scholarships, year-round housing in the dorms for those who have no other place to live, academic and therapeutic counseling, tutoring, health care coverage, campus jobs, bedding, towels, cleaning products, toiletries and even occasional treats. Ms. Boccara mentioned the gift cards she was given to a local supermarket. At Los Angeles City College, Marcellia Goodrich likes the free snacks in the program office and Mr. Roque noted the free paper. “It’s useful and helps you stay on budget,” he said.
No one tracks college programs for foster youth. But it is clear there has been considerable growth in recent years, spurred in part by the creation in 2003 of the Chafee grant program, an annual $48 million federal appropriation used to award scholarships of up to $5,000. Also important was federal legislation in 2008 giving states the option of extending federal aid programs for foster youth from age 18 to 21.
Seven states are considered to have particularly strong programs. California’s is known as the Guardian Scholars. Texas, Ohio and North Carolina call theirs Reach; Michigan has Fostering Success Michigan; Washington, Passport to College Promise; and Virginia, Great Expectations. Many colleges provide some services, but a far smaller number have the kinds of comprehensive support systems offered at places like Western Michigan University, Sam Houston State University, City College of San Francisco, and community colleges in Tallahassee, Fla., and Austin, Tex.
California has the largest foster population — about 54,000 of the 400,000 in care nationally — and Los Angeles, with 18,500 children, has the most among cities, more even than New York, which has about 14,000.
U.C.L.A. began identifying foster students five years ago when it introduced its Guardian Scholars program, and the results are promising. There are now 250 current and former foster students at the university. The first group had a four-year graduation rate of 65 percent and a five-year rate of 80 percent, which compares favorably with rates for all low-income students (61 percent and 84 percent) and campuswide (69 percent and 88 percent).
According to the state website, 33 two- and four-year colleges have a Guardian Scholars program or are in the process of developing one. The first, at California State University, Fullerton, started in 1998 with financial backing from Ronald V. Davis, the former chief executive of the Perrier Group. Philanthropy has played a role at several universities. Paul Blavin, who made his fortune as an investor, has financed programs at the University of Michigan and Northern Arizona University. The Pritzker Foundation recently gave $3 million to U.C.L.A.’s program. Casey Family Scholars provides scholarships and support services directly to students, an average of $3,500 a year to about 220 undergraduates.
Why treat foster youth differently from other low-income students?
Janina Montero, a vice chancellor who developed U.C.L.A.’s program, said most poor children have at least one parent to guide them and provide basic needs — and a place to call home.
There are 25 students at U.C.L.A., including Mr. Starks and Ms. Boccara, who have nowhere to go during holiday and summer breaks, and so live in the dorms year-round. And while spending Thanksgiving and Christmas in a nearly empty 10-story residence hall may sound depressing, it is better, Ms. Boccara said, than her alternative, some cheap motel room. During her first Christmas at school, she said, “I stayed busy, but it was pretty lonely here — no one I knew, just foreign students in the dorm. It does get a person down.”
Mr. Starks had a hard time, too: “The first night I cried — a young adult in college on my own.” But he also said, “I don’t let brokenness get in my way; I’ve always known if I worked hard, doors would open.”
The biggest door so far opened in the spring of 2011, when he was accepted to U.C.L.A. For Mr. Starks, it was validation that he was the kind of person he’d believed himself to be.
“To celebrate,” he said, “they had a dinner for me at the group home.”
Last summer, living in a dorm was not so hard because by then Ms. Boccara and Mr. Starks had become friends. Ms. Boccara is pretty sure her roommate during the school year has a sense of her background but not the details. “She hears me on the phone, but I don’t talk about it,” Ms. Boccara said. “The only person I tell is Kaleef.”
•
THINGS are harder at Los Angeles City College.
Nearly three-quarters of the students qualify for an annual tuition waiver because they are too poor to pay the $1,400. The average age is 30, meaning many have probably failed at college before. The graduation rate is only 6 percent within six years (versus the national community college average of about 14 percent in three years).
City College, like most community colleges, does not have dorms, and most students find housing through social welfare agencies. Mr. Roque lives in housing financed by First Place for Youth, a nonprofit organization that serves former foster children. He was 18 when he left the last of seven homes, in 2010, and as is true for many foster youth, there are pieces of his story he doesn’t know. He was too young to remember why he was removed from the first two homes. Another house was closed for violations. His fourth foster mother planned to adopt him, he said, but changed her mind.
Last spring, Mr. Roque was rejected by Los Angeles City College for having failed to make adequate academic progress; he says he had twice attended area community colleges, but withdrew without earning any credits. To be reconsidered, he had to write a letter of appeal explaining any mitigating circumstances, which in Mr. Roque’s case was homelessness.
He wrote of the transition from foster care to no care: “Couch surfing, from relatives who did not like me to friends who got tired of me. I was not able to concentrate in school or even keep my grade-point average at a 2.0.” He also filled a single-spaced page with several more mitigating circumstances and in his conclusion promised to do better: “I have such a good support system in place, people who expect a lot from me, and I am in a place where I can make school a priority. I am serious this time.”
Mr. Roque was better than his word, said Jon Lee, director of tutoring for the Guardian Scholars.
“All summer he was the first one in each morning,” he said. “He’d put on headphones, sit at the computer with a dictionary and find a website to practice grammar. He’d show me sentences and ask if they were right.”
New foster students take a mandatory course that includes lists of proper study skills, which Mr. Roque has memorized.
“Sit in front of the room,” he recited for me. “Visit your professors during visiting hours. Have a notebook and binder for each class, not all stuffed in one. Take notes for every class and give yourself five to eight minutes after class to go over them. Ask questions even if they sound silly. Be prompt and considerate with assignments and attendance.”
He paused. “Do all they say and you’ll be golden.”
In addition to the new friends he’s made, the computer he won at a raffle sponsored by the Scholars, and knowing that he will not age out of the housing program until March 27, 2015, Mr. Roque has a job working four shifts a week at the Subway restaurant across from the college. “Life’s the best it’s ever been,” he said.
WHEN Mr. Lee started tutoring at the community college, he thought he was going to save everyone who walked through the door. He believed it was just a matter of putting in the hours. But early on, he watched as most fell through the cracks. There have been 300 students in the program for at least a semester since it began in 2009, and so far 14 have graduated or transferred to a four-year college.
“It’s hopeless to help students who haven’t come to the realization that they have to want to do it on their own,” he said.
For Randy Davis, who is on the honor roll and is the student government parliamentarian, the light bulb went off late, two years ago, when he was 25. Part of the problem was his family, he said. He never knew his father, hasn’t seen his mother in 10 years, his grandmother who had cared for him is dead and two brothers are in jail.
But partly, he said, it was him. He has been arrested multiple times on drug charges.
“I can’t put it all on others,” he said. “I have no bad feelings toward my biologicals. I tell people, ‘Leave the past in the past,’ but I hear people say, ‘If I had parents I would have done better.’ Well, you don’t have parents. You got to learn to roll with it.”
Mr. Davis, Ms. Goodrich and Ms. Moorer have been known as the Three Peas in a Pod since they formed a study group last year that helped them all get A’s in a geology course. But while Mr. Davis and Ms. Goodrich are applying to four-year colleges, Ms. Moorer, who’s 27, dropped out recently.
On the Friday I first interviewed her, she said, “I haven’t told anyone, but today is my last day.” She is a single mother and said the reasons had to do with her 8-year-old son. “I’m going all the time. There are nights we don’t eat until 9,” she said. “This weekend I’m making a new plan.”
But the following week there still was no plan, except that she did not want the Scholars staff to know. “They’ll pressure me to come back,” she said. “I’ve broken down a couple of times in my life and I can feel it coming on, so I just need to slow it down.”
When Veronica Garcia, who runs the Scholars program at the community college, found out, she pushed for Ms. Moorer to take at least one course, and the young woman promised that she would. Ms. Moorer said she’s fearful of being one of those women who is 40 years old and still trying to get her associate degree.
THE leap from foster care to U.C.L.A. is enormous, and there are things those who reach the far side share. Ms. Boccara said that from the time she was little, school was the one place she felt happy. Mr. Starks said his teachers intuited the harshness of his life and encouraged him. Donovan Arrington, a U.C.L.A. sophomore, said he always felt he was imposing on people; doing well in school made him feel less of a burden.
Having come so far, they are not willing to be overlooked. When Rayvonn Anthony Lee, who has overcome abuse, legal problems, serious illness and homelessness, was rejected by U.C.L.A., he appealed. He is now a senior, majoring in contemporary classical music composition. One of his professors, Adam Schoenberg, said Mr. Lee is a great student, but what stands out more is his will.
“I was able to get to know him after class, as he was always very ambitious,” Mr. Schoenberg wrote in an email. “He would ask to show me his compositions several times after theory class, and he even took a bus — because he didn’t have a car at the time — from the west side all the way to downtown just to have a lesson with me (and it probably took him close to two hours with all of the bus changes). It was clear from the get-go that Anthony is hungry to make it as a composer.”
Angel Gabarret has lived on the streets of Los Angeles before and does not intend to again. In September, he used the money he’d earned working at a carwash to pay his rent through the end of the year. “Now I just have to worry about education and food and that’s it,” he said.
He was born in Honduras, orphaned at a young age and, still a child, made his way north to Guatemala, then Mexico, and finally the United States, a street urchin supporting himself by gathering firewood, hawking newspapers and helping women sell vegetables in local markets. He didn’t learn English until he was 18 and it has taken him five years to graduate from City College, but now at 27, with a B average, he is applying to California State University, Long Beach.
Most days he arrives on campus by 7 a.m. and works at an outdoor bench. Afternoons, he goes for tutoring at the Scholars office. After several false starts, he has decided on a career.
“I think accounting is doable for me,” he explained. “I understand the concepts. I’ve learned the accounting equations. It’s a very, very clean job. You work indoors in a nice office. You’re supposed to dress up with shiny shoes and dressed-up pants.”
Michael Winerip moderates the Booming blog of The Times and has covered education and parenting.