What fires you up?

We want to know. Tell us what’s on your mind, and we will share your burning questions here and invite our professors and students to weigh in.

Submit Your Question

Q:
How can we break down the systemic racism of the judicial system?
A:
GLENN MCNAIR, Professor of History

The simple, but disappointing, answer is that we cannot break down systemic racism within the judicial system until we have made significant inroads in eroding white supremacy itself.

White supremacy is an ideology based on the belief that whites are superior to other races; it is manifested through systems that keep whites in superior positions.

The most cutting-edge research on implicit bias, including research conducted by Harvard’s Project Implicit, confirms that most Americans associate Blacks with crime and violence. Until fairly recently, these beliefs were held consciously and articulated publicly. Today, these beliefs are largely unconscious, but nevertheless play a significant role in decision-making processes. Accordingly, when actors within the criminal justice system make decisions, they act on these biases — even if they are not aware of them.

Bias in the criminal justice system begins with police officers; they decide whom to stop and frisk or arrest. Study after study demonstrates that police officers bring Blacks into the system with far greater frequency than whites. The process continues with prosecutors deciding whether to charge those arrested, take them to trial, or urge them to enter guilty pleas. Again, fewer whites are charged, compelled to enter guilty pleas, or sent to trial. At conviction, judges decide punishments. Blacks routinely receive harsher punishments than whites, up to and including the death penalty. Before judges can hand down sentences, juries must determine guilt or innocence. Juries convict Black defendants at higher rates than white defendants.

A seemingly straightforward way of handling this problem of discretion is to eliminate it by crafting strict guidelines about what to do at each stage of the process. This has been tried since the 1970s and has failed, or has produced horrific unintended consequences. For example, mandatory-minimum sentencing was designed to ensure that all criminals convicted of particular crimes would receive similar punishments. That reform is responsible for today’s mass incarceration crisis.

In sum, we cannot make progress within the criminal justice system until we deal with white supremacy in our society as a whole. And the first step in that process is acknowledging that it is a problem, something we have been loath to do.

Glenn McNair is a professor of history at Kenyon and former police officer and special agent within the U.S. Treasury Department.

Q:
Are we ready to commercially deploy artificial intelligence?
A:
KATHERINE ELKINS, Associate Professor of Comparative Literature and Humanities, & JON CHUN, Visiting Instructor of Humanities and Affiliated Scholar in Scientific Computing

When IBM's Deep Blue defeated chess champion Garry Kasparov in 1997, The New York Times reported it would take 100 years before a computer could defeat a human Go master. Go is a game so vastly more complex than chess that computers must rely on heuristic shortcuts that mimic human intuition.

Yet just 20 years later, AlphaGo twice defeated Lee Sedol, an 18-time world Go champion, becoming the first computer to beat a top-ranked human in a Go match.

Astonishing advances like this have led to a global “arms race” in artificial intelligence, as companies compete to acquire top AI talent. Last year, China announced a multibillion-dollar initiative to become the world leader in AI and, recently, Russian President Vladimir Putin proclaimed, “Whoever becomes the leader in this sphere will become the ruler of the world.”

While AI experts work to develop smarter AI applications for all of us — from driverless cars to personal assistants — fewer have taken up the broader challenge to ensure we don’t become, in Henry David Thoreau’s words, “the tools of our tools.”

What we need now are humanists conversant in AI who can critique and shape the future that AI may restructure. After all, AI forces us to ask questions about what it means to be human. And answering these questions will, in the end, be more important than AI milestones like AlphaGo. The only way to answer these questions is to develop an understanding of the world that is both broad and deep, since these questions cannot be answered within any single discipline or major.

No one in 1997 could have predicted the advances in big data, computational power and algorithms that are making AI increasingly powerful and inexpensive. How, then, can we predict what AI will look like 20 years from now? Even the experts are poor at forecasting this future. But the rapid and revolutionary changes being brought on by AI compel us to continue putting the human at the center of our technological world.