Monday, December 3, 2012

Gov. Scott challenges colleges to offer $10k degree; Dems call it ‘Walmart of Education’

Date: Monday, November 26, 2012, 2:35pm EST http://www.bizjournals.com/jacksonville/news/2012/11/26/gov-scott-challenges-colleges-to.html?ana=RSS&s=article_search

TALLAHASSEE—Gov. Rick Scott “challenged” state colleges to create $10,000 four-year degrees, a continuation of his low-cost strategy for higher education that Democrats slammed as an attempt to turn the schools into “the Walmart of Education.”

Scott issued his challenge in a media blitz and a morning press conference at St. Petersburg College, with another event scheduled for Orlando in the afternoon.

“You should be able to work and go to school and not end up with debt,” Scott told WFLA TV, according to a transcript provided by his office. “If these degrees cost so much money, tuition is so high, that’s not going to happen. I have put out this challenge to our state colleges -- we have 28 great state colleges -- and say, ‘Can you come up with degrees where individuals can get jobs that the total degree costs $10,000?’”
State colleges are generally what used to be known as community colleges, though many of them now offer four-year degrees.

The proposal echoes a similar push by Texas Gov. Rick Perry, Scott’s political idol, for $10,000 degrees in that state. It also comes as Scott has made containing the costs of higher education a top priority after colleges and universities say years of budget cuts have forced tuition hikes.

At the morning press conference, St. Petersburg College President William Law said his school would accept the challenge.

“St. Petersburg College is once again excited about the opportunity to be part of a statewide college pilot program that lowers the cost of a college education for the citizens we serve,” Law said in a press release. “Affordable education always has been at the forefront of the college’s mission.”

Wednesday, October 31, 2012

The Brain Trainers

IN the back room of a suburban storefront previously occupied by a yoga studio, Nick Vecchiarello, a 16-year-old from Glen Ridge, N.J., sits at a desk across from Kathryn Duch, a recent college graduate who wears a black shirt emblazoned with the words “Brain Trainer.” Spread out on the desk are a dozen playing cards showing symbols of varying colors, shapes and sizes. Nick stares down, searching for three cards whose symbols match.
 

“Do you see it?” Ms. Duch asks encouragingly.

“Oh, man,” mutters Nick, his eyes shifting among the cards, looking for patterns.

Across the room, Nathan Veloric, 23, studies a list of numbers, looking for any two in a row that add up to nine. With tight-lipped determination, he scrawls a circle around one pair as his trainer holds a stopwatch to time him. Halfway through the 50 seconds allotted to complete the exercise, a ruckus comes from the center of the room.

“Nathan’s here!” shouts Vanessa Maia, another trainer. Approaching him with a teasing grin, she claps her hands like an annoying little sister. “Distraction!” she shouts. “Distraction!”

There is purpose behind the silliness. Ms. Maia is challenging the trainees to stay focused on their tasks in the face of whatever distractions may be out there, whether Twitter feeds, the latest Tumblr posting or old-fashioned classroom commotion.

On this Wednesday evening at the Upper Montclair, N.J., outlet of LearningRx, a chain of 83 “brain training” franchises across the United States, the goal is to improve cognitive skills. LearningRx is one of a growing number of such commercial services — some online, others offered by psychologists.

Unlike traditional tutoring services that seek to help students master a subject, brain training purports to enhance comprehension and the ability to analyze and mentally manipulate concepts, images, sounds and instructions. In a word, it seeks to make students smarter.

“We measure every student pre- and post-training with a version of the Woodcock-Johnson general intelligence test,” said Ken Gibson, who began franchising LearningRx centers in 2003, and has data on more than 30,000 of the nearly 50,000 students who have been trained. “The average gain on I.Q. is 15 points after 24 weeks of training, and 20 points in less than 32 weeks.”

The three other large cognitive training services — Lumosity, Cogmed and Posit Science — dance around the question of whether they truly raise I.Q. but do assert that they improve cognitive performance.

“Your brain, just brighter,” is the slogan of Lumosity, an online company that now has some 25 million registered members. According to its Web site, “Our users have reported profound benefits that include: clearer and quicker thinking; faster problem-solving skills; increased alertness and awareness; better concentration at work or while driving; sharper memory for names, numbers and directions.”

Those results are achieved, the companies say, by repurposing cognitive tasks initially developed by psychologists as tests of mental abilities. With technical names like the antisaccade, the N-back and the complex working memory span task, the exercises are dressed up as games that become increasingly difficult as students gain mastery.

Conceived to appeal to adults, especially baby boomers looking to stanch the effects of aging, Lumosity now draws one-quarter of its audience from students between the ages of 11 and 21, according to Michael Scanlon, the company’s scientific director. “I was taken aback that so much of our user base is so young,” he said. “The particular audience I had in mind at the earliest stages of the company was my mother.” In response to requests from schoolteachers, the fee is now waived — $15 a month — for students in their classrooms. More than 1,000 teachers and 10,000 students have enrolled this year, Mr. Scanlon said.

For the one-on-one training at LearningRx, fees are decidedly higher, from about $80 to $90 an hour in Upper Montclair. The students come with learning disabilities, with grades they want to improve in a competitive academic environment, or simply with hopes of being sharper.

TAYLOR WEBSTER, 16, trains daily for lacrosse with a personal coach. “She has natural athletic ability,” said her mother, Samantha Newman-Webster. “But it’s really through her training that she has been able to achieve to the point where she’s being sought out by college recruiters.” Would brain training, the family wondered, do for her grades what physical training did for her lacrosse game?

Ms. Newman-Webster enrolled Taylor, already a B student at the private Montclair Kimberley Academy, at LearningRx in February. “I felt like I wanted to do whatever I could to make her learning easier, faster, deeper,” she said. “I knew she was going to be taking the SATs, and they say it will improve whatever you’re trying to do.”

Speaking by cellphone on the way to a lacrosse game, Taylor explained, with a laugh, what it’s like: “In the beginning your head is sore. Honestly, I had headaches after going there the first few times. It was kind of tedious and made my brain hurt. But I started getting better and better at it. It kind of became a competition for me to do better each time.”

She’s now studying for the SAT. “It used to take me an hour to memorize 20 words. Now I can learn, like, 40 new words in 20 minutes.”

Others — a majority, according to LearningRx — seek cognitive training in the hopes of remediating a learning disability.

Nathan Veloric has had learning issues since elementary school. Last December, he had just graduated from William Paterson University with a degree in communications when his mother heard about LearningRx from a business networking group. His goal was to build up skills. “I’ve got to keep on bettering myself,” said Mr. Veloric, whose first job out of college is as a part-time cashier at a CVS near his home in New Providence, N.J. “I’m happy to have a job in this economy. While looking for something better I’m working my way up at CVS — I’m trying to go full time and then get into their management training program.”

Of his brain training, he said, “I don’t know if it makes you smarter. But when you get to each new level on the math and reading tasks, it definitely builds up your self-confidence.”

Nick Vecchiarello struggles with attention deficit disorder. “During middle school we had every kind of tutor known to man,” said his mother, Diane. “Name it, we’ve done it” — stimulant medication, sessions with a psychologist. “He never liked anything to do with education.” A brochure from LearningRx showed up in the mail, and the scientific aura around the program impressed the Vecchiarellos. They decided to spend $12,000 for a year of visits, one to three times a week.

“It has been a financial strain,” acknowledged Nick’s father, Richard, a fifth-grade teacher in nearby Fair Lawn. “Yes, I think it’s made a change in Nick. His grades are better. If it gives him a leg up on life, you can’t put a price on that.” In September, after six months, Nick and his parents decided he had made enough progress to stop his medication.

For all the glowing testimonials, there are postings to be found online from parents of children with learning disabilities, complaining about substantial fees and minimal benefit.

Whether the results last beyond the blush of training — indeed, whether I.Q. can truly be bolstered in a meaningful way — are questions on which serious scientists still disagree. Studies have been published in recent years finding that intelligence can be improved through training, but not enough of them are of sufficient scientific quality to convince everyone in the field.

One skeptic is Douglas K. Detterman, professor of psychology at Case Western Reserve University and founding editor of the influential academic journal Intelligence. His research would seem to offer reassurance to college-bound brain trainees, because he has found a close correlation between I.Q. and SAT scores. “All of these tests are pretty much the same thing,” he said. “They measure general intelligence.”

The catch, however, is that Dr. Detterman believes that cognitive training only makes people better at taking tests, without improving their underlying intelligence. Dr. Detterman said of brain training, “It’s probably not harmful. But I would tell parents: Save your money. Look at the studies the commercial services have done to support their results. You’ll find very poorly done studies, with no control groups and all kinds of problems.”

Executives at traditional tutoring and test-prep services tend to share Dr. Detterman’s view — perhaps not surprisingly, because some of the brain training programs pitch themselves in direct contrast to standard tutoring. (“Brain Training vs. Tutoring,” says the headline of a LearningRx brochure. “Is tutoring what your child really needs?”) Bror Saxberg, chief learning officer of Kaplan Inc., questions whether improving performance on an intelligence test will translate directly to improved grades and test scores.

“I keep looking for good studies that show how math performance or an ability to write an essay or some other really important set of skills have been dramatically enhanced for normal kids,” Dr. Saxberg said. “What you care about is not an intelligence test score, but whether your ability to do an important task has really improved. That’s a chain of evidence that would be really great to have. I haven’t seen it.” Dr. Saxberg, by the way, holds a master’s in mathematics from Oxford University, a Ph.D. in electrical engineering and computer science from the Massachusetts Institute of Technology, and an M.D. from Harvard Medical School.

Still, a new and growing body of scientific evidence indicates that cognitive training can be effective, including that offered by commercial services.

Oliver W. Hill Jr., a professor of psychology at Virginia State University in Petersburg, recently completed a $1 million study, yet to be published, financed by the National Science Foundation to test the effects of LearningRx. He looked at 340 middle-school students who spent two hours a week for a semester using LearningRx exercises in their schools’ computer labs and an equal number of students who received no such training. Those who played the online games, Dr. Hill found, not only improved significantly on measures of cognitive abilities compared to their peers, but also on Virginia’s annual statewide standardized tests.

He’s now conducting a follow-up study of college students in Texas and, he said, sees even stronger gains when the training is offered one on one.

Michael Merzenich, who spent years conducting brain plasticity research in animals as a professor at the University of California, San Francisco, started Posit Science to make the results of his research more widely available. “This is medicine,” he insisted. “It is driving changes in the brain.”

The programs offered by Posit, Lumosity and Cogmed are now being used by psychologists not affiliated with the companies to help people with diagnosed cognitive disorders, including traumatic brain injury, A.D.H.D., and the aftereffect of chemotherapy.

Kristina K. Hardy, a neuropsychologist at Children’s National Medical Center in Washington, is testing the use of Cogmed with childhood cancer survivors, whose ability to learn is sometimes significantly reduced after chemotherapy and radiation. Founded by a Swedish neuroscientist, Cogmed was bought in 2010 by Pearson, the largest provider of testing and teaching materials, and is offered via psychologists and other clinical specialists.

“I entered this work with some skepticism that just doing some computer work at home could help anybody,” she said. “I thought we wouldn’t be able to move the needle on their cognitive abilities. And not everybody has been able to make gains. But I’ve had some kids who not only reported that they had very big changes in the classroom, but when we bring them back in the laboratory to do neuropsychological testing, we also see great changes. They show increases that would be highly unlikely to happen just by chance.”

Julie Schweitzer, director of the A.D.H.D. Program at the University of California, Davis, published a study in July finding that children with attention deficit hyperactivity disorder who used Cogmed for 25 days were significantly better able to stay on task and to perform on a test of working memory — the ability to not just hold but to juggle items in the mind despite brief distractions.

“We got positive results, but it was a very small study,” she said. It involved just 26 children. Even so, she said: “In general, I’m cautiously optimistic about the potential for cognitive training. I’m concerned that some of the studies out there have not had the rigor that ought to be there. But I think the potential is there.”

AT Lumosity’s headquarters on the sixth floor of a rehabbed building in downtown San Francisco, bicycles line a wall, the meeting room has foosball, and the intensely focused young employees tap at their computers in a sprawling room without cubicles. It could be mistaken for a satellite office of Google. Except, oh, wait a minute, that guy who won the American Crossword Puzzle Tournament five times in a row? He actually quit Google last year to work here.

“I looked around for a place that would get me closer to the kinds of games and puzzles I enjoy,” said Tyler Hinman, who is now a software developer and game designer at Lumosity. “But where crosswords and Sudoku are intended to be a diversion, the games here give that same kind of reward, only they’re designed to improve your brain, your memory, your problem-solving skills.”

More than 40 games are offered by Lumosity. One, the N-back, is based on a task developed decades ago by psychologists. Created to test working memory, the N-back challenges users to keep track of a continuously updated stream of items and recall which item appeared “n” steps back. Practice on the N-back has been shown in some studies to lead to significant increases in fluid intelligence. Unlike crystallized intelligence, the mental storehouse of knowledge and procedures, fluid intelligence is the ability to solve novel problems, to see patterns and understand complex relationships — to find order in the chaos.
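
To make the task concrete, here is a minimal, hypothetical console version of an N-back drill in Python; it is not Lumosity's game, and the letters, trial count and scoring are purely illustrative.

```python
import random

def run_n_back(n=2, trials=20, items="ABCDEF"):
    """A toy N-back drill: answer 'y' if the current item matches the one
    shown n steps earlier, otherwise just press Enter."""
    history, correct = [], 0
    for t in range(trials):
        item = random.choice(items)
        print(f"Trial {t + 1}: {item}")
        said_match = input("Match n back? (y/Enter) ").strip().lower() == "y"
        is_match = len(history) >= n and history[-n] == item
        correct += (said_match == is_match)
        history.append(item)
    print(f"Score: {correct}/{trials}")

if __name__ == "__main__":
    run_n_back()
```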

Not all the exercises offered by the commercial services carry the scientific pedigree of the N-back. Some offered by LearningRx exude an undeniable whiff of the theatrical, like having trainers shout and clap to help students learn to ignore distractions.

Perhaps that reflects the company’s origins. Whereas the founders of Posit, Cogmed and Lumosity all have advanced degrees in psychology and neuroscience, the founder of LearningRx obtained his Ph.D. in pediatric optometry.

“Largely my focus was on visual training,” Dr. Gibson said. Treating children with problems involving focusing or eye movement, he developed an interest in dyslexia and other learning disorders. “I realized I could help those who had eyes crossed, but I wasn’t helping very much with their academic performance,” he said. “I started reading the literature about training abilities of every skill, not just visual, but auditory and memory and speed of processing.”

Dr. Gibson is self-taught in the field of psychology; his confidence in his program, he said, comes from the gains students make on I.Q. tests. Trainers and franchise owners must be college graduates but need not have expertise — beyond the training given to them by LearningRx. Ms. Duch and Ms. Maia, the Montclair trainers, have B.A.’s in psychology.

“This has been a process since 1986,” Dr. Gibson said. “We have so systematized the program that the educational background of the trainers and franchise owners is not an issue. I don’t come from the perspective of an academic. We’re not part of Duke University or Harvard. We have to get results to justify the fees that we charge and get referrals.”

Back at the franchise in Upper Montclair, Nathan Veloric is trying to do his “speed numbers” exercise just a bit faster, in 45 seconds rather than 50, still without missing a single pair of numbers that add up to nine. Four times in a row he goes down a list, each time missing just one of the pairs.

“O.K., try it again,” says Ms. Duch. “I know you’re getting tired. Just give me two more tries.” And again she starts the timer.

Dan Hurley is a neuroscience reporter writing a book on new research into intelligence.

Friday, October 26, 2012

How to Deal with Non-cooperative Students as a Tutor

In a classroom setting, a student is surrounded by classmates, and everyone is expected to pay attention to the instructor. Not only are a student’s actions monitored by the teacher, but his or her behavior is also observed by friends and classmates. Tutoring is usually done in a much different environment. A tutor works one-on-one with the student, often in the student’s own home, and the student’s parent(s) may or may not be present for the tutoring session. These factors can sometimes lead to a student who is completely uncooperative with his/her tutor.
What do I do if my tutee won’t cooperate with me?

Set expectations
At the start of the tutorial, the tutor needs to set expectations with the student. The tutee needs to understand that even though he or she may not be in a traditional classroom, he or she is still expected to listen carefully and complete the tasks as assigned. The student may feel free to act out of line if his/her peers aren’t present, especially if the tutoring session is in his/her own home, where he or she feels most comfortable and sometimes more entitled to act out. Establishing ground rules from the start can help to avoid misbehavior down the road.

Work through the behavior
Kids sometimes misbehave. Reasons can range from having a bad day at school to not getting enough sleep the night before. If a student is having difficulty in one particular subject, he or she may express his/her frustrations by acting out. If you can, try to work through the behavior when it happens. Have the student take a break. Try offering him/her a snack. Taking a few minutes for the student to recharge could be just the thing they need.

Provide an incentive
Incentives don’t have to be anything fancy. Encouragement and compliments can sometimes work wonders. Simple but genuine praise helps. If the tutee is at the elementary school level, gold stars on assignments, stickers, or even little trinkets can sometimes encourage good behavior. Consider setting up a point system and giving the student the option of earning a reward by keeping a tally. Over the course of the tutoring session, if a student is listening and paying attention, he or she can receive good-behavior points. If he or she misbehaves, you subtract points. Something as simple as keeping points could be just the incentive the student needs to be on his or her best behavior for the session.

Consult the parent
Tutoring is not babysitting. Depending on the age of the student, one or both parents should be present for the tutoring session. This doesn’t mean that the parent needs to be sitting at the table with the student and the tutor, but he or she should be in the house should the child begin to misbehave. Not only can it make the student feel more comfortable, but having a parent nearby for the session helps keep the parent in the loop. The tutor can review what was covered in the session with the parent so that he or she can continue to practice the material with the student before the next tutoring period.

http://blog.tutoringmatch.com/non-cooperative-students/?goback=%2Egde_66995_member_177006698

Not exactly for college-age tutors, but some good information.

Tuesday, October 23, 2012

Scholarly Publishing's Gender Gap

Women cluster in certain fields, according to a study of millions of journal articles, while men get more credit


When Jennifer Jacquet first visited Carl T. Bergstrom's evolutionary-biology lab at the University of Washington last year, she was surrounded by men. Men staring at data on the 27-inch Mac Pro computer screen that takes center stage in the lab. Men talking about mathematical proofs, about a South Park episode on evolution, about their latest mountain-climbing adventures.

"The lab is like visiting a fraternity," says Ms. Jacquet, who completed her postdoc at the University of British Columbia before starting this year as a clinical assistant professor in environmental studies at New York University.

Perhaps being the only woman in the lab prompted Ms. Jacquet’s answer when Mr. Bergstrom, a professor of theoretical and evolutionary biology, asked her what should be done with a remarkable new trove of data. Mr. Bergstrom and Jevin D. West, a postdoc, had just gotten their hands on nearly eight million scholarly articles collected by JSTOR, a digital archiving service. They went all the way back to Isaac Newton’s time.

The biologists at Washington, who specialize in using mathematical models and computer simulations to see how academic ideas flow through networks of scholars, wondered how they could use the data.

For Ms. Jacquet, the answer was clear: What did the articles and their authors show about gender differences in publishing? Were women and men equal in this fundamental coin of the academic realm, a currency that buys tenure, promotions, and career success?

To Ms. Jacquet's surprise, Mr. West and Mr. Bergstrom took her idea and ran with it. The result is the largest analysis ever done of academic articles by gender, reaching across hundreds of years and hundreds of fields. "This has never been done on this scale before," says Mr. West.

Although the percentage of female authors is still less than women’s overall representation within the full-time faculty ranks, the researchers found that the proportion has increased as more women have entered the professoriate. They also show that women cluster into certain subfields and are somewhat underrepresented in the prestigious position of first author. In the biological sciences, women are even more underrepresented as last author. The last name on a scientific article is typically that of the senior scholar, who is not necessarily responsible for doing most of the research or writing but who directs the lab where the experiment was based.

"What we've done is assemble this huge collection of data across many of the major scientific, social-science, and humanities fields, providing a new lens for looking at how gender plays out in scholarly authorship," says Mr. West. "But this also provides a platform for further research to be done. We don't want this to stop here."

Publication Can Mean a Job

Scholarly publishing, more than anything else, is the measuring stick of professors' research productivity. In the humanities, it's usually the monograph. But in the hard sciences and in many social sciences, it's journal articles.

To be hired on the tenure track in those fields by a top research university, young scholars increasingly must have publications on their CV's by the time they finish their doctoral degrees. And once they are hired, more publications in leading journals typically are required to be promoted at every step along the way to full professor.

"When I went to get my first job, I had a sole-authored paper in a very good journal," says Shelley J. Correll, who started her academic career as an assistant professor of sociology at the University of Wisconsin at Madison in 2001. "I think that article is what undoubtedly got me that first job. And it's even more common now for students to have that."

Ms. Correll is now a professor of sociology at Stanford University and director of its Michelle R. Clayman Institute for Research on Gender. Her name will appear fourth on an article that Mr. Bergstrom's lab hopes to publish on the findings, behind Mr. West (who is first), Ms. Jacquet, and Molly M. King, one of Ms. Correll's graduate students. Fifth is Theodore C. Bergstrom, an economist at the University of California at Santa Barbara, and last is his son, the University of Washington's Mr. Bergstrom.

Author order is very important. "If I were to give people a vita of two people who had the exact same number of publications and one person was first author on a lot of papers and the other had publications in the same journals but was second through fourth author, I guarantee you people will prefer first," says Ms. Correll.

So negotiating author order becomes crucial. But women may not be as confident and have as much experience as men with those negotiations. "If I'm writing with a man, he may be more likely to insist he be first," Ms. Correll says. "When women negotiate in general, they are less likely to be successful. People don't consider their requests as legitimate."

That made names and gender the first order of business for Mr. Bergstrom's lab.

First they created an algorithm to label the millions of JSTOR papers by field and subfield. Then the trick was to figure out whether an author was male or female. The lab consulted data on birth names collected by the Social Security Administration. If a name was used at least 95 percent of the time for a female, they coded it female, and the same for a male. If use of the name was more ambiguous, they threw the paper out.
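
As a rough illustration of that coding rule (not the lab's actual code), a name-to-gender lookup built from the public Social Security baby-name files might look something like the sketch below; the 95 percent threshold comes from the article, while the file layout, function name and use of Python are assumptions.

```python
import csv
from collections import defaultdict

def build_gender_lookup(ssa_file, threshold=0.95):
    """Label a first name 'F' or 'M' only when one sex accounts for at least
    `threshold` of the Social Security birth records for that name; anything
    more ambiguous is left out, and papers whose authors can't all be labeled
    are dropped. Assumes rows of name,sex,count as in the public SSA files."""
    counts = defaultdict(lambda: {"F": 0, "M": 0})
    with open(ssa_file, newline="") as f:
        for name, sex, count in csv.reader(f):
            counts[name.lower()][sex] += int(count)
    lookup = {}
    for name, c in counts.items():
        total = c["F"] + c["M"]
        if total == 0:
            continue
        if c["F"] / total >= threshold:
            lookup[name] = "F"
        elif c["M"] / total >= threshold:
            lookup[name] = "M"
    return lookup
```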

Of the eight million articles the group started with, it ended up analyzing two million—written by 2.7 million scholars—whose author composition was similar to the whole. Roughly half were published between 1665 and 1989, and the other half between 1990 and 2010. Included in the database are papers in the hard sciences, the social sciences, law, history, philosophy, and education. Missing from the JSTOR data are articles in engineering, English, foreign languages, and physics.

The data show that over the entire 345 years, 22 percent of all authors were female. (Even though few papers in the JSTOR archive originated in the first 100 years, the researchers still felt that examining the entire data set was worthwhile.) The data also show that women were slightly less likely than that to be first author: About 19 percent of first authors in the study were female. Women were more likely to appear as third, fourth, or fifth authors.

According to the data in just the most recent time period, it is clear that the proportion of female authors over all is rising. From 1990 to 2010, the percentage of female authors went up to 27 percent. In 2010 alone, the last year for which full figures are available, the proportion had inched up to 30 percent. "The results show us what a lot of people have been saying and many of my female colleagues have been feeling," says Ms. Jacquet. "Things are getting better for women in academia."

Women still are not publishing, though, in the same proportion as they are present in academe as professors. The same year that the share of female authors in the study reached 30 percent, women made up 42 percent of all full-time professors in academe and about 34 percent of all those at the most senior levels of associate and full professor, according to the American Association of University Professors.

As the proportion of female authors over all has grown, the biologists' study found, so has the percentage of women as first authors. In fact, by 2010 about the same proportion of women were first authors as were authors in general—about 30 percent.

But those gains have not been mirrored in the last-author position, which is of particular importance in the biological sciences. According to the data, in 2010 only about 23 percent of last authors over all were female. In molecular and cell biology, women represented almost 30 percent of authorships from 1990 to 2010, but only 16.5 percent of last authors. And over that same time period in ecology and evolution, women represented nearly 23 percent of authors but only 18.5 percent of last authors.

"The gap between women as first authors and women as last authors is actually growing, which suggests that women in scientific fields are allowed to have ideas and do most of the work on a paper, but do not yet have the big grants and labs full of students and postdocs that would establish them in the prestigious last-author position," says Ms. Jacquet.

A peek beneath the surface, going from field down to subfield, also reveals hidden disparities. For example, the database shows that over the final 20 years covered by the study, about 23 percent of authors writing in ecology and evolution have been female. But in some subfields of those disciplines, the proportion of women authors was even smaller, including just 19.5 percent in herpetology and 16.6 percent in paleontology.

The same phenomenon exists in economics: Between 1990 and 2010, 13.7 percent of authors in the database were female. In some subfields, though, the proportion of female authors was even smaller. In macroeconomics, for instance, women represented just under 10 percent of authors. In a subfield labeled "household decision-making," however, the proportion of female authors shot way up, to 30 percent.

What's notable, says Mr. Bergstrom, is the way some of those differences mirror gender-role stereotypes. For instance, in sociology, where the gender breakdown was more even—women represented 41.4 percent of authors since 1990, men 58.6 percent—things look very different in the subfield of criminology where men made up 74 percent of all authors. On the other hand, women were much more heavily represented in subfields including "sex roles," where females accounted for 56 percent of the authors, and in the study of aging parents, where women constituted 62 percent.

"Sociology represents a domain where we think of things being more even because of the overall gender ratios," says Mr. Bergstrom. "But once you get in, for whatever reason you see we stratify ourselves. And we do it in ways that are loosely consistent with our common sense, the stereotypes about who does what."

Why Not 50-50?

Women's progress in academe has long been a hot topic, not least the debate over why women publish less than men do. Female professors are more likely to emphasize quality over quantity, some scholars argue, turning out fewer but meatier pieces than do their male colleagues, who are more apt to increase their productivity by publishing their work in more-frequent chunks.

In addition, studies show that women spend less time on research and more time on teaching and committee work. And it is often research and publishing, which require sustained attention, that suffer when women devote time to caring for young children.

As more women earn Ph.D.'s and take faculty jobs, though, and as the gap between the number of women and men in academe narrows, scholars have begun thinking about whether anything can or should be done about gender-based differences that remain in publishing, hiring, promotion, and pay. Do those differences result from choices women make, scholars wonder, or from discrimination?

Where professors come down on that question influences how they interpret the data on scholarly publishing from Mr. Bergstrom's lab. Cheryl Geisler, dean of the Faculty of Communication, Art, and Technology at Simon Fraser University, in British Columbia, says the fact that women do not account for at least half of the professors in most fields, and half of those writing scholarly articles, doesn't necessarily mean that women face discrimination. But "if it's not 50-50," she says, "we should at least ask, Why not?" Authorship data showing that some subfields are even more male or female than the average is "like the canary in the coal mine" and deserves further study, she says.

Some academics argue that gender clustering in subfields can skew the results of scholarship. "Just as when you had primarily Europeans writing history, you had a certain version, the same thing happens when women begin to write about crime and punishment," says Anita Levy, associate secretary of the AAUP. "There may be a way of thinking about it, or women who may see fit to look at different factors that may not have occurred to a scholar who is male."

Ann Mari May, a professor of economics at the University of Nebraska at Lincoln, has done research showing that gender influences the way economists view social policy. She sent surveys to 400 members of the American Economic Association who hold doctorates in economics from American universities. Of the 143 who responded, she found that the women among them were 24 percentage points more likely than the men to believe that the size of the U.S. government is either about right, too small, or much too small, and 21 points likelier to reject the idea that the United States suffers from excessive government regulations.

John J. Siegfried, a professor emeritus of economics at Vanderbilt University who has worked with the economic association, says there is not much that academe can do about gender imbalances, short of forcing women into subfields that may not interest them. "What's the solution?" he asks. "Should I tell women starting a Ph.D. that they should only study finance or econometrics?"

Wendy M. Williams is a professor of human development at Cornell University who studies the role of women in science, including scholarly publishing. She says the Bergstrom data on gender and authorship don't necessarily show discrimination in academic publishing, even though women cluster in some subfields and publish at lower rates than either their male counterparts or their presence in academe in general.

“The international literature shows that when women submit work, there is no bias in it being accepted, but the likelihood of women submitting work may be lower,” she says. “If a woman is interested in a field, but she has to devote time to three kids, she may not be submitting as often. I don’t see that as discrimination.”

Just last month, though, a new study released by Yale University suggested that gender bias remains a factor in academe. It found that science professors at six major research universities were likely to rate male job candidates as more qualified than female candidates to be hired as laboratory managers, even though the study assigned the hypothetical male and female applicants identical qualifications.

Ms. Jacquet says she was surprised to find that her own experience as a female scientist was not reflected by women in the Washington study of the JSTOR archives. At the time she visited Mr. Bergstrom's lab last year, she had published 10 scholarly articles and was the first author on all of them. Her fellow postdoc Mr. West, she noticed, had published a few more in total but was the second author on five. Although she knew that being first author was important, Ms. Jacquet also wondered whether it was because she is female that she had had to take the initiative in publishing all of her own scholarly articles, while Mr. West had been asked to collaborate on several peer-reviewed articles.

But the data, she says, show that female professors in the study actually were more likely to be second through fourth authors than first. It knocked down her theory that male scientists had failed to ask her to collaborate on academic articles because she is a woman. Since she first visited Mr. Bergstrom's lab, in fact, she has published three academic articles on which she is not the lead author. The article on gender and authorship will be her fourth.

"For me," she says, "this really showed the beauty of science, that you can have this personal experience that isn't reflected in big data."

Correction (10/22/2012, 3:36 p.m.): This article originally misstated Ms. Jacquet's authorship status on 10 scholarly articles she had published as of last year. She was the first author on all 10, not the only author. The article has been updated to reflect this correction.

http://chronicle.com/article/The-Hard-Numbers-Behind/135236/

Friday, October 5, 2012

10 Emerging Education Technologies You Should Know About

In educational technology, social networking and agile apps are all the rage. Whether it’s a group of students collaborating on a project or a research team seeking out resources around the globe, today’s EdTech essentials are all about keeping in touch. The emerging products, companies and high-tech tools on our list are all designed to make life easier for online teachers, students and researchers. Take a look.
  • Knowledge Transmission: This Cambridge-based team aims to deliver the best social learning experience in the world. To do that, its developers are hard at work on mobile products, like Kigo Apps, designed to prepare students for TOEIC practice tests. The company creates a number of digital products for online learning and boasts a powerful back-list conversion, which makes it simple for you to bring many of your low-tech tools into the digital age.
  • Scholrly: Where do you go when you’re looking for solid research? Scholrly founders hope you choose their brand-new search site. Co-founder Corbin Pon says, “When we talk about neighborhoods, we know that there are communities of related research that are not always easy to see and explore.” The creators of the engine, currently in beta, hope to revolutionize the way scholars and garage inventors alike find data.
  • Instructure: Online learning veterans know that organizing a web-based classroom requires a complex system. That’s why Instructure created Canvas, a feature-rich platform with a speed grader, an online testing manager and other simple-yet-powerful pedagogy tools. Based in Salt Lake City, this Learning Management System (LMS) newcomer is led by CEO Josh Coates and co-founders Brian Whitmer and Devlin Daley.
  • Skitch by Evernote: You already love the note-taking powerhouse. Now meet Skitch, the sketching tool that makes it simple to make your point using built-in arrows, shapes and quick sketches. The tool moves flawlessly from phone to desktop to tablet. Evernote’s team — including CEO Phil Libin and founder Stepan Pachikov — is banking that its products will help the world communicate more easily.
  • Desire2Learn: Simple meets sophisticated; that’s the Desire2Learn philosophy. Their Learning Suite 10 offers an intuitive user interface, beautiful course homepages and an easy way to make podcasts and downloadable presentations. Desire2Learn was founded in 1999 and — through partnerships with companies like IBM and Adobe — aspires to stand at the forefront of advanced educational technology for years to come.
  • Udacity: How many robotics engineers does it take to reinvent education? At Udacity, the answer is three: David Stavens, Mike Sokolsky and Sebastian Thrun, who use their unique backgrounds to think big with distance learning. (How big? Think 200,000 students per class.) The system includes Google’s moderator service, which allows students to vote on the best questions for instructors to answer.
  • The SNAC Project: Believe it or not, there were social networking sites before Facebook. Their remnants — newspapers, corporate publications and personal histories — are scattered across manuscript archives and libraries around the world. The Social Networks and Archival Context Project hopes to change that, creating methods and tools for matching and combining records, creating timeline-map histories, accommodating languages other than English and more. Before long, you could find the menu from a picnic in 1950s Idaho without leaving your deck chair.
  • Mendeley: Manage your research, collaborate with other academics and bring your bibliography online with the tool designed to make life easier for grad students and professors alike. The site, co-founded by Dr. Victor Henning, Jan Reichelt and Paul Föckler, already boasts more than 1.7 million users and more than 242 million documents. And, unlike EndNote and RefWorks, Mendeley’s basic software package is free.
  • Moodle: This user-friendly course management system from Australia is designed for both purely online schooling and blended courses. Moodle is open source, and volunteers take charge of much of the development process. The end product is easy to customize for large and small courses alike. Martin Dougiamas, the creator of Moodle, thinks of his LMS as an infinitely-customizable Lego-world for educators.
  • SlideShark: Love your iPad, but hate having to switch to a laptop in order to view presentations? SlideShark offers an elegant solution. The free app retains the fonts, colors and graphics of your PowerPoint presentations, allowing you to show on the go and work anywhere. The app is made by Brainshark, a company founded in 1999 by Joe Gustafson and designed to change the way students and businesses work on the road. SlideShark looks to be the perfect solution for online college presentations on mobile devices.
The tools above have one important thing in common: they’re all designed to evolve and adapt with emerging technology and shifting student and teacher needs. You’ll find new innovations in the EdTech space every day, but it’s safe to say that the minds behind our list are a good place to watch for the next generation of smarter schools.

This is a post from our partners at Online Schools.

http://edudemic.com/2012/09/10-emerging-education-technologies/?goback=%2Egde_66995_member_170396730

Tuesday, September 25, 2012

474: Back to School

http://www.thisamericanlife.org/radio-archives/episode/474/back-to-school?act=1

Ira talks with Paul Tough, author of the new book How Children Succeed, about the traditional ways we measure ability and intelligence in American schools. They talk about the focus on cognitive abilities, conventional "book smarts." They discuss the current emphasis on these kinds of skills in American education and on standardized testing, and then turn to a growing body of research that suggests we may be on the verge of a new approach to some of the biggest challenges facing American schools today. Paul Tough discusses how “non-cognitive skills” — qualities like tenacity, resilience, impulse control — are being viewed as increasingly vital in education, and Ira speaks with economist James Heckman, who’s been at the center of this research and this shift.
 
Doctor Nadine Burke Harris weighs in to discuss studies that show how poverty-related stress can affect brain development, and inhibit the development of non-cognitive skills. We also hear from a teenager named Kewauna Lerma, who talks about her struggles with some of the skills discussed, like restraint and impulse control.

We then turn to the question of what schools can offer to kids like Kewauna, and whether non-cognitive skills are something that can be taught. Paul discusses research that suggests these kinds of skills can indeed be learned in a classroom, even with young people, like Kewauna, facing especially adverse situations, and also the success of various programs that revolve around early interventions. Ira reports on a mother and daughter in Chicago, Barbara and Aniya McDonald, who have been working with a program designed to help them improve their relationship — and ultimately to put Aniya in a strong position to learn non-cognitive skills. (38 1/2 minutes)

Thanks, Lynn, for this great resource!

Friday, September 14, 2012

The Machines Are Taking Over

The Education Issue

Neil Heffernan was listening to his fiancée, Cristina Lindquist, tutor one of her students in mathematics when he had an idea. Heffernan was a graduate student in computer science, and by this point — the summer of 1997 — he had been working for two years with researchers at Carnegie Mellon University on developing computer software to help students improve their skills. But he had come to believe that the programs did little to assist their users. They were built on elaborate theories of the student mind — attempts to simulate the learning brain. Then it dawned on him: what was missing from the programs was the interventions teachers made to promote and accelerate learning. Why not model a computer program on a human tutor like Lindquist?
 
Over the next few months, Heffernan videotaped Lindquist, who taught math to middle-school students, as she tutored, transcribing the sessions word for word, hoping to isolate what made her a successful teacher. A look at the transcripts suggests the difficulties he faced. Lindquist’s tutoring sessions were highly interactive: a single hour might contain more than 400 lines of dialogue. She asked lots of questions and probed her student’s answers. She came up with examples based on the student’s own experiences. She began sentences, and her student completed them. Their dialogue was anything but formulaic.

Lindquist: Do you know how to calculate average driving speed?
Student: I think so, but I forget.
Lindquist: Well, average speed — as your mom drove you here, did she drive the same speed the whole time?
Student: No.
Lindquist: But she did have an average speed. How do you think you calculate the average speed?
Student: It would be hours divided by 55 miles.
Lindquist: Which way is it? It’s miles per hour. So which way do you divide?
Student: It would be 55 miles divided by hours.

As the session continued, Lindquist gestured, pointed, made eye contact, modulated her voice. “Cruising!” she exclaimed, after the student answered three questions in a row correctly. “Did you see how I had to stop and think?” she inquired, modeling how to solve a problem. “I can see you’re getting tired,” she commented sympathetically near the end of the session. How could a computer program ever approximate this?

In a 1984 paper that is regarded as a classic of educational psychology, Benjamin Bloom, a professor at the University of Chicago, showed that being tutored is the most effective way to learn, vastly superior to being taught in a classroom. The experiments headed by Bloom randomly assigned fourth-, fifth- and eighth-grade students to classes of about 30 pupils per teacher, or to one-on-one tutoring. Children tutored individually performed two standard deviations better than children who received conventional classroom instruction — a huge difference.
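
For a sense of scale (this calculation is not from Bloom's paper), a two-standard-deviation advantage under a roughly normal score distribution puts the average tutored student near the 98th percentile of the conventionally taught class; the quick check below uses only Python's standard library.

```python
from statistics import NormalDist

# A student 2 standard deviations above the classroom mean outscores about
# 97.7 percent of classroom-taught peers, assuming normally distributed scores.
print(round(NormalDist().cdf(2.0), 3))  # 0.977
```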

Affluent American parents have since come to see the disparity Bloom identified as a golden opportunity, and tutoring has ballooned into a $5 billion industry. Among middle- and high-school students enrolled in New York City’s elite schools, tutoring is a common practice, and the most sought-after tutors can charge as much as $400 an hour.

But what of the pupils who could most benefit from tutoring — poor, urban, minority? Bloom had hoped that traditional teaching could eventually be made as effective as tutoring. But Heffernan was doubtful. He knew firsthand what it was like to grapple with the challenges of the classroom. After graduating from Amherst College, he joined Teach for America and was placed in an inner-city middle school in Baltimore. Some of his classes had as many as 40 students, all of them performing well below grade level. Discipline was a constant problem. Heffernan claims he set a school record for the number of students sent to the principal’s office. “I could barely control the class, let alone help each student,” Heffernan told me. “I wasn’t ever going to make a dent in this country’s educational problems by teaching just a few classes of students at a time.”

Heffernan left teaching, hoping that some marriage of education and technology might help “level the playing field in American education.” He decided that the only way to close the persistent “achievement gap” between white and minority, high- and low-income students was to offer universal tutoring — to give each student access to his or her own Cristina Lindquist. While hiring a human tutor for every child would be prohibitively expensive, the right computer program could make this possible.

So Heffernan forged ahead, cataloging more than two dozen “moves” Lindquist made to help her students learn (“remind the student of steps they have already completed,” “encourage the student to generalize,” “challenge a correct answer if the tutor suspects guessing”). He incorporated many of these tactics into a computerized tutor — called “Ms. Lindquist” — which became the basis of his doctoral dissertation. When he was hired as an assistant professor at Worcester Polytechnic Institute in Massachusetts, Heffernan continued to work on the program, joined in his efforts by Lindquist, now his wife, who also works at W.P.I. Together they improved the tutor, which they renamed ASSISTments (it assists students while generating an assessment of their progress). Fifteen years after Heffernan first set up his video camera, the computerized tutor he designed has been used by more than 100,000 students, in schools all over the country. “I look at this as just a start,” he told me. But, he added confidently, “we are closing the gap with human tutors.”

Grafton Middle School, a public school in a prosperous town a few miles outside Worcester, has been using ASSISTments since 2010. Last spring, I visited the home of Tyler Rogers, a tall boy with reddish-blond hair who was just finishing seventh grade at Grafton and who used the program to do his math homework. (While ASSISTments has made a few limited forays into tutoring other subjects, it is almost entirely dedicated to teaching math.) His teachers described him as “conscientious” and “mature,” but he had struggled in his pre-algebra class that year. “Sometime last fall, it started to get really hard,” he said as he opened his laptop.

Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.

ASSISTments incorporates many of the findings made by researchers who, spurred by the 1984 Bloom study, set out to discover what tutors do that is so helpful to student learning. First and foremost, they concluded, tutors provide immediate feedback: they let students know whether what they’re doing is right or wrong. Such responsiveness keeps students on track, preventing them from wandering down “garden paths” of unproductive reasoning.

The second important service tutors provide, researchers discovered, is guiding students’ efforts, offering nudges in the right direction. ASSISTments provides this, too, in the form of a “hint” button. Tyler chose not to use it that evening, but if he had, he would have been given a series of clues to the right answer, “scaffolded” to support his own problem-solving efforts. For the answer “5-3x,” the computer responded: “You need to take a closer look at your signs. Notice there is a minus in front of the ‘y.’ ”
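
To make the mechanics concrete, here is a hypothetical sketch of an immediate-feedback and scaffolded-hint loop of the kind described above; it is not ASSISTments' code, the second hint is invented, and the exact string matching deliberately mirrors the brittleness described a few paragraphs below.

```python
# Hypothetical sketch only; the problem and first hint come from the article,
# everything else is illustrative.
PROBLEM = "Write the equation in function form: 3x-y=5"
CORRECT = "3x-5"   # the expected entry for y = 3x - 5
HINTS = [
    "You need to take a closer look at your signs. Notice there is a minus in front of the 'y.'",
    "Get y by itself: add y to both sides, then subtract 5 from both sides.",
]

def respond(answer: str, hints_requested: int = 0) -> str:
    """Give immediate feedback on an exact string match, plus a scaffolded hint on request."""
    if answer == CORRECT:          # strict matching: even an extra space is marked wrong
        return "Correct!"
    feedback = "Sorry, wrong answer."
    if hints_requested > 0:
        feedback += " Hint: " + HINTS[min(hints_requested, len(HINTS)) - 1]
    return feedback
```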

Tyler’s father, Chris Rogers, who manages complex networks of computers for a living, is pleased that his son’s homework employs technology. “Everyone works with computers these days,” he told me later. “Tyler might as well get used to using them now.” But his mother, Andrea, is more skeptical. Andrea is studying for a master’s in education and plans to become an elementary-school teacher. She is not opposed to the use of educational technology, but she objects to the flat affect of ASSISTments. In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right. “In the beginning, when Tyler was first learning to use ASSISTments, there was a lot of frustration,” Andrea says. “I would sit there with him for hours, helping him. A computer can’t tell when you’re confused or frustrated or bored.”

Heffernan, as it happens, is working on that. Dealing with emotion — helping students regulate their feelings, quelling frustration and rousing flagging morale — is the third important function that human tutors fulfill. So Heffernan, along with several researchers at W.P.I. and other institutions, is working on an emotion-sensitive tutor: a computer program that can recognize and respond to students’ moods. One of his collaborators on the project is Sidney D’Mello, an assistant professor of psychology and computer science at the University of Notre Dame.

“The first thing we had to do is identify which emotions are important in tutoring, and we found that there are three that really matter: boredom, frustration and confusion,” D’Mello said. “Then we had to figure out how to accurately measure those feelings without interrupting the tutoring process.” His research has relied on two methods of collecting such data: applying facial-expression recognition software to spot a furrowed brow or an expression of slack disengagement; and using a special chair with posture sensors to tell whether students are leaning forward with interest or lolling back in boredom. Once the student’s feelings are identified, the thinking goes, the computerized tutor could adjust accordingly — giving the bored student more challenging questions or reviewing fundamentals with the student who is confused.

Of course, as D’Mello puts it, “we can’t install a $20,000 butt-sensor chair in every school in America.” So D’Mello, along with Heffernan, is working on a less elaborate, less expensive alternative: judging whether a student is bored, confused or frustrated based only on the pattern of his or her responses to questions. Heffernan and a collaborator at Columbia’s Teachers College, Ryan Baker, an expert in educational data mining, determined that students enter their answers in characteristic ways: a student who is bored, for example, may go for long stretches without answering any problems (he might be talking to a fellow student, or daydreaming) and then will answer a flurry of questions all at once, getting most or all correct. A student who is confused, by contrast, will spend a lot of time on each question, resort to the hint button frequently and get many of the questions wrong.
“Right now we’re able to accurately identify students’ emotions from their response patterns at a rate about 30 percent better than chance,” Baker says. “That’s about where the video cameras and posture sensors were a few years ago, and we’re optimistic that we can get close to their current accuracy rates of about 70 percent better than chance.” Human judges of emotion, he notes, reach agreement on what other people are feeling about 80 percent of the time.
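
The article doesn’t describe how the detectors are built, but the response-pattern idea can be illustrated with a rule-of-thumb classifier. Everything in the sketch below, from the field names to the thresholds, is invented for illustration; the real detectors are statistical models trained on much richer log data.

    # Rule-of-thumb illustration of inferring mood from response patterns alone.
    # All thresholds and field names are invented for this sketch.

    from dataclasses import dataclass

    @dataclass
    class ResponseLog:
        seconds_per_question: float   # average time spent on each question
        hint_rate: float              # share of questions where the hint button was used
        wrong_rate: float             # share of questions answered incorrectly
        longest_idle_seconds: float   # longest stretch with no answers at all
        burst_answers: int            # answers entered in a flurry after the idle stretch

    def guess_emotion(log):
        # Bored: long silences, then a burst of mostly correct answers.
        if log.longest_idle_seconds > 300 and log.burst_answers >= 5 and log.wrong_rate < 0.2:
            return "bored"
        # Confused: a long time on each question, heavy hint use, many wrong answers.
        if log.seconds_per_question > 90 and log.hint_rate > 0.5 and log.wrong_rate > 0.5:
            return "confused"
        return "no clear signal"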

Heffernan is also experimenting with ways that computers can inject emotion into the tutoring exchange — by flashing messages of encouragement, for example, or by calling up motivational videos recorded by the students’ teachers. The aim, he says, is to endow his computerized tutor “with the qualities of humans that help other humans learn.”

But is humanizing computers really the best way to supply students with effective tutors? Some researchers, like Ken Koedinger, a professor of human-computer interaction and psychology at Carnegie Mellon University, take a different view from Heffernan’s: computerized tutors shouldn’t try to emulate humans, because computers may well be the superior teachers. Koedinger has been working on computerized tutors for almost three decades, using them not only to help students learn but also to collect data about how the learning process works. Every keystroke a student makes — every hesitation, every hint requested, every wrong answer — can be analyzed for clues to how the mind learns. A program Koedinger helped design, Cognitive Tutor, is currently used by more than 600,000 students in 3,000 school districts around the country, generating a vast supply of data for researchers to mine. (The program is owned by a company called Carnegie Learning, which was sold to the Apollo Group last year for $75 million; Apollo also owns the for-profit University of Phoenix.)

Koedinger is convinced that learning is so unfathomably complex that we need the data generated by computers to fully understand it. “We think we know how to teach because humans have been doing it forever,” he says, “but in fact we’re just beginning to understand how complicated it is to do it well.”
As an example, Koedinger points to the spacing effect. Decades of research have demonstrated that people learn more effectively when their encounters with information are spread out over time, rather than massed into one marathon study session. Some teachers have incorporated this finding into their classrooms — going over previously covered material at regular intervals, for instance. But optimizing the spacing effect is a far more intricate task than providing the occasional review, Koedinger says: “To maximize retention of material, it’s best to start out by exposing the student to the information at short intervals, gradually lengthening the amount of time between encounters.” Different types of information — abstract concepts versus concrete facts, for example — require different schedules of exposure. The spacing timetable should also be adjusted to each individual’s shifting level of mastery. “There’s no way a classroom teacher can keep track of all this for every kid,” Koedinger says. But a computer, with its vast stores of memory and automated record-keeping, can. Koedinger and his colleagues have identified hundreds of subtle facets of learning, all of which can be managed and implemented by sophisticated software.
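
A toy version of that expanding-interval idea can be written as a simple review scheduler. The starting gap and growth factor below are arbitrary choices for the sketch, not figures from Koedinger’s research, and a real system would adjust them for each student and each type of material.

    # Expanding-interval review scheduler: short gaps at first, longer gaps as
    # mastery grows. The first gap and growth factor are illustrative assumptions.

    from datetime import date, timedelta

    def review_schedule(first_exposure, reviews=6, first_gap_days=1, growth=2.0):
        """Return review dates whose gaps lengthen after each successful recall."""
        schedule = []
        gap = float(first_gap_days)
        when = first_exposure
        for _ in range(reviews):
            when = when + timedelta(days=round(gap))
            schedule.append(when)
            gap *= growth  # stretch the gap each time the material is recalled
        return schedule

    # Gaps of 1, 2, 4, 8, 16 and 32 days between successive reviews.
    print(review_schedule(date(2012, 9, 16)))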

Yet some educators maintain that however complex the data analysis and targeted the program, computerized tutoring is no match for a good teacher. It’s not clear, for instance, that Koedinger’s program yields better outcomes for students. A review conducted by the Department of Education in 2010 concluded that the product had “no discernible effects” on students’ test scores, while costing far more than a conventional textbook, leading critics to charge that Carnegie Learning is taking advantage of teachers and administrators dazzled by the promise of educational technology. Koedinger counters that “many other studies, mostly positive,” have affirmed the value of the Carnegie Learning program. “I’m confident that the program helps students learn better than paper-and-pencil homework assignments.”

Heffernan isn’t susceptible to the criticism that he is profiting from school districts, because he gives ASSISTments away free. And so far, the small number of preliminary, peer-reviewed studies he has conducted on his program support its value: one randomized controlled trial found that the use of the computerized tutor improved students’ performance in math by the equivalent of a full letter grade over the performance of pupils who used paper and pencil to do their homework.

But Heffernan does face one serious hurdle: any student who wishes to use ASSISTments needs a computer and Internet access. More than 20 percent of U.S. households are not equipped with a computer; about 30 percent have no broadband connection. Heffernan originally hoped to try ASSISTments out in Worcester’s mostly urban school district, but he had to scale back the program when he found that few students were consistently able to use a computer at home. So ASSISTments has mainly been adopted by affluent suburban schools like Grafton Middle School and Bellingham Memorial Middle School in Massachusetts — populated, Heffernan said ruefully, by students who already have the advantages of high-functioning schools and educated, involved parents. But, he told me brightly, he recently received a grant from the Department of Education to supply ASSISTments to almost 10,000 public-school students in Maine — a largely poor, largely rural state in which all schoolchildren nonetheless own a laptop, thanks to a state initiative. Heffernan hopes that by raising the Maine students’ test scores with ASSISTments, he will inspire more officials in states around the country to see the virtue of making tutoring universal.

The morning after I watched Tyler Rogers do his homework, I sat in on his math class at Grafton Middle School. As he and his classmates filed into the classroom, I talked with his teacher, Kim Thienpont, who has taught middle school for 10 years. “As teachers, we get all this training in ‘differentiated instruction’ — adapting our teaching to the needs of each student,” she said. “But in a class of 20 students, with a certain amount of material we have to cover each day, how am I really going to do that?”

ASSISTments, Thienpont told me, made this possible, echoing what I heard from another area math teacher, Barbara Delaney, the day before. Delaney teaches sixth-grade math in nearby Bellingham. Each time her students use the computerized tutor to do their homework, the program collects data on how well they’re doing: which problems they got wrong, how many times they used the hint button. The information is automatically collated into a report, which is available to Delaney on her own computer before the next morning’s class. (Reports on individual students can be accessed by their parents.) “With ASSISTments, I know none of my students are falling through the cracks,” Delaney told me.
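
The nightly report is, at bottom, an aggregation over the evening’s homework logs. Here is a minimal sketch of that collation step, with field names and data shapes that are assumptions rather than ASSISTments’ actual schema:

    # Minimal sketch: collate one night's homework attempts into a per-student report.
    # Field names and data shapes are assumptions, not ASSISTments' schema.

    from collections import defaultdict

    def build_report(attempts):
        """attempts: list of dicts with 'student', 'problem', 'correct', 'used_hint'."""
        report = defaultdict(lambda: {"answered": 0, "wrong_problems": [], "hints_used": 0})
        for a in attempts:
            row = report[a["student"]]
            row["answered"] += 1
            row["hints_used"] += 1 if a["used_hint"] else 0
            if not a["correct"]:
                row["wrong_problems"].append(a["problem"])
        return dict(report)

    # One student's row might read:
    # {'answered': 10, 'wrong_problems': ['3x - y = 5'], 'hints_used': 2}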

After completing a few warm-up problems on their school’s iPod Touches, the students turned to the front of the room, where Thienpont projected a spreadsheet of the previous night’s homework. Like stock traders going over the day’s returns, the students scanned the data, comparing their own grades with the class average and picking out the problems that gave their classmates trouble. (“If you got a question wrong, but a lot of other people got it wrong, too, you don’t feel so bad,” Tyler explained.)
Thienpont began by going over “common wrong answers” — incorrect solutions that many students arrived at by following predictable but mistaken lines of reasoning. Or perhaps not so predictable.

“Sometimes I’m flabbergasted by the thing all the students get wrong,” Thienpont said. “It’s often a mistake I never would have expected.” Human teachers and tutors are susceptible to what cognitive scientists call the “expert blind spot” — once we’ve mastered a body of knowledge, it’s hard to imagine what novices don’t know — but computers have no such mental block. Highlighting “common wrong answers” allows Thienpont to address shared misconceptions without putting any one student on the spot.
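
Surfacing those shared misconceptions is essentially a counting problem over the class’s submissions. A rough sketch follows, assuming simple (student, answer, is_correct) records and an arbitrary 25 percent threshold:

    # Rough sketch: flag wrong answers that many students converged on.
    # The data shape and the 25 percent threshold are assumptions for the example.

    from collections import Counter

    def common_wrong_answers(submissions, min_share=0.25):
        """submissions: list of (student, answer, is_correct) tuples for one problem."""
        wrong = Counter(answer for _, answer, correct in submissions if not correct)
        threshold = max(2, int(min_share * len(submissions)))
        return [(answer, count) for answer, count in wrong.most_common() if count >= threshold]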

I saw another unexpected effect of computerized tutoring in Delaney’s Bellingham classroom. After explaining how to solve a problem that many got wrong on the previous night’s homework, Delaney asked her students to come up with a hint for the next year’s class. Students called out suggested clues, and after a few tries, they arrived at a concise tip. “Congratulations!” she said. “You’ve just helped next year’s sixth graders learn math.” When Delaney’s future pupils press the hint button in ASSISTments, the former students’ advice will appear.

Unlike the proprietary software sold by Carnegie Learning, or by education-technology giants like Pearson, ASSISTments was designed to be modified by teachers and students, in a process Heffernan likens to the crowd-sourcing that created Wikipedia. His latest inspiration is to add a button to each page of ASSISTments that will allow students to access a Web page where they can get more information about, say, a relevant math concept. Heffernan and his W.P.I. colleagues are now developing a system of vetting and ranking the thousands of math-related sites on the Internet.

For all his ambition, Heffernan acknowledges that this technology has limits. He has a motto: “Let computers do what computers are good at, and people do what people are good at.” Computers excel in following a precise plan of instruction. A computer never gets impatient or annoyed. But it never gets excited or enthusiastic either. Nor can a computer guide a student through an open-ended exploration of literature or history. It’s no accident that ASSISTments and other computerized tutoring systems have focused primarily on math, a subject suited to computers’ binary language. While a computer can emulate, and in some ways exceed, the abilities of a human teacher, it will not replace her. Rather, it’s the emerging hybrid of human and computer instruction — not either one alone — that may well transform education.

Near the end of my visit to Worcester, I told Heffernan about a scene I witnessed in Barbara Delaney’s class. She had divided her sixth graders into what she called “flexible groups” — groupings of students by ability that shift daily depending on the data collected in her ASSISTments report. She walked over to the group that struggled the most with the previous night’s homework and talked quietly with one girl who looked on the brink of tears. Delaney pointed to the girl’s notebook, then to the ASSISTments spreadsheet projected on a “smart” board at the front of the room. She touched the girl’s shoulder; the student lifted her face to her teacher and managed a crooked smile.

When I finished recounting the incident, Heffernan sat back in his chair. “That’s not anything we put into the tutoring system — that’s something Barbara brings to her students,” he remarked. “I wish we could put that in a box.”

Annie Murphy Paul is the author of “Origins: How the Nine Months Before Birth Shape the Rest of Our Lives” and is at work on a book about the science of learning.

Editor: Sheila Glaser

http://www.nytimes.com/2012/09/16/magazine/how-computerized-tutors-are-learning-to-teach-humans.html?_r=1&pagewanted=all

Monday, August 27, 2012

How Universities Treat Adjuncts Limits Their Effectiveness in the Classroom, Report Says


Colleges that want to set the stage for their students to succeed should stop hiring adjunct professors at the last minute and then denying those instructors access to the technology and resources they need to teach effectively, a new report suggests.

"The 'just in time' staffing model is unjust for faculty and for students and clearly compromises education quality," says the 26-page policy report from the Center for the Future of Higher Education, a virtual think tank of the Campaign for the Future of Higher Education. (The center plans to post the report on its Web site on Thursday.)

Contingent faculty members who are hired just before the start of an academic term can either prepare for their classes while they’re not yet on the payroll or resign themselves to teaching courses for which they’re not adequately prepared, the report says. Add a lack of access to personal office space, computers, library resources, and curriculum guidelines, among other things, and "the education experience of students suffers, both inside and outside of the classroom," it says.

The report is based on the findings of an online survey of 500 contingent faculty members conducted last fall by the New Faculty Majority Foundation, the research arm of the advocacy group New Faculty Majority.

"Faculty working conditions are student learning conditions, but we realize that people don't get that connection," said Maria Maisto, who is president of the New Faculty Majority and a co-author of the report. "We wanted to take faculty working conditions and really connect them to student learning. We need to really explain how those conditions shortchange students."

3-Day Notice

The report takes its title, "Who Is Professor 'Staff,'" from the generic way adjunct professors are listed on course schedules. Its subtitle continues, "And How Can This Person Teach So Many Classes?"—a refrain the report says reflects the confusion students feel while looking at their class schedules.
Ms. Maisto is one of two well-known contingent faculty members who are among the report's authors. The other is Steve M. Street, a longtime creative-writing and literature instructor who died of cancer last week. The report is dedicated to Mr. Street.

Esther S. Merves, director of research and special programs for the New Faculty Majority Foundation and an adjunct at George Washington University, and Gary D. Rhoades, director of the Campaign for the Future of Higher Education, are also co-authors.

The survey described in the report asked contingent faculty members about hiring procedures and working conditions. Roughly three-fourths of the 500 respondents teach part time.
Asked about the courses for which they got the most lead time, 17 percent of the respondents said they had received less than two weeks' notice between being hired and the start of the class, while 18 percent said they had received between two and three weeks' notice. Asked about appointments for which they had the least lead time, 38 percent had less than two weeks' notice, and 25 percent had between two and three weeks' notice.

For some, it was far less: "I teach several classes online as well and those classes typically give me about a three-day notice," said one survey respondent quoted in the report.

The report also paints a bleak picture of adjunct faculty members' ability to tap instructional resources. In describing appointments that gave them the most access, 34 percent said they didn't receive sample syllabi until less than two weeks before classes started and 21 percent never got access to office space. In the worst-case settings, the report says, 41 percent of respondents had no access to a campus phone.
Ms. Maisto said adjunct faculty members often bear the financial costs related to lack of access, and the survey showed that they work hard to shield their students from any ill effects that might stem from their professors' work conditions.

Working Temporarily, for Decades

The New Faculty Majority Foundation wants administrators and others to use its survey tool to collect data that will "make transparent" the hiring and employment practices of contingent faculty.

The report also sharply questions whether administrators really need the flexibility they say hiring adjunct faculty provides them with. "How can you call someone temporary when they've been working at the same institution for decades?" Ms. Maisto said. "It's really time to unpack that and be honest. Let's talk about what kind of flexibility is really necessary."

The report is the most recent in a stream of research that has provided an inside glimpse into the problems that plague contingent faculty. In June the Coalition on the Academic Workforce released an extensive study of non-tenure-track faculty members, and last month a document from the Delphi Project on the Changing Faculty and Student Success detailed a year's worth of conversations among stakeholders from all segments of academe about the shift in the academic work force to contingent faculty, who now make up about 70 percent of all instructors on college campuses.

Among other things, the Delphi Project's report highlighted the lack of data related to the effects of that shift and offered some strategies to paint a more accurate picture of the professoriate.

Ms. Maisto said the New Faculty Majority Foundation plans to explore related issues in future papers, such as why adjuncts continue to do the work they do even though their work conditions make it difficult for them to do their best. Another area of interest: Under what circumstances do adjuncts share with students the inequities they face on the job, and what are the implications of doing so?

Correction (8/23/2012, 10:15 a.m.): The original version of this article misspelled the first name of a New Faculty Majority Foundation official and omitted mention of her teaching position. She is Esther S. Merves, not Ester, and she is also an adjunct instructor at George Washington University. The article has been updated to reflect this correction.

http://chronicle.com/article/Adjuncts-Working-Conditions/133918/