President Kennedy’s Speeches

Recently I was invited to speak at a dinner hosted by a Christian group at the Kennedy Museum in Dallas. They asked if I might speak about President John F. Kennedy and relate his ideas to some of the issues we are dealing with today.

I began by asking them to imagine what might happen if we could bring President Kennedy in a time machine to our time and place. What would he think of what has happened in America?

Of course, we cannot accurately predict what he might think, but we do have his speeches that give us some insight into his perspective on the major issues in the 1960s. And as I re-read his great speeches, I think the audience concluded that they said more about the change in America than anything else.

I think it would be fair to say that President Kennedy’s speeches illustrate what was mainstream (perhaps even a bit progressive) back in the 1960s. Today (with perhaps the exception of his speech on church/state issues) most of his ideas would be considered right wing. And if I might be so bold, I think it is reasonable to say that many of the leaders of his party today would reject many of the ideas he put forward more than forty years ago.

Foreign Policy

Let’s first look at President Kennedy’s perspective on foreign policy. One of his best known speeches is his inaugural address on January 20, 1961:

Let the word go forth from this time and place, to friend and foe alike, that the torch has been passed to a new generation of Americans—born in this century, tempered by war, disciplined by a hard and bitter peace, proud of our ancient heritage—and unwilling to witness or permit the slow undoing of those human rights to which this Nation has always been committed, and to which we are committed today at home and around the world.

Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty.

In his day, the great foreign policy challenge was communism. The threat from the Soviet Union, as well as Red China, was his primary focus. And he made it clear that he would bring an aggressive foreign policy to the world in order to assure the survival and success of liberty.

Today the great foreign policy challenge is international terrorism (which is a topic that President Kennedy addressed in his day). And there are still threats to America and the need to address the issue of human rights that he talked about more than forty years ago. America still needs a foreign policy that aggressively deals with terrorists who would threaten our freedom and dictators who keep whole nations in bondage.

It may surprise many to realize that more than forty years ago President Kennedy understood the threat of terrorism. Here is what he said to the General Assembly of the United Nations on September 25, 1961:

Terror is not a new weapon. Throughout history it has been used by those who could not prevail, either by persuasion or example. But inevitably they fail, either because men are not afraid to die for a life worth living, or because the terrorists themselves came to realize that free men cannot be frightened by threats, and that aggression would meet its own response. And it is in the light of that history that every nation today should know, be he friend or foe, that the United States has both the will and the weapons to join free men in standing up to their responsibilities.

Terrorism is with us in the twenty-first century, though the terrorists today are primarily radical Muslims. And President Kennedy rightly understood the threat terrorism posed to freedom. As we just saw, he proposed an aggressive foreign policy to deal with these threats. He knew that “free men cannot be frightened by threats.”

President Kennedy also spoke to the issue of human rights. In his inaugural address on January 20, 1961, he quoted from the book of Isaiah to illustrate his point:

Let both sides unite to heed in all corners of the earth the command of Isaiah—to “undo the heavy burdens . . . and to let the oppressed go free.”

And if a beachhead of cooperation may push back the jungle of suspicion, let both sides join in creating a new endeavor, not a new balance of power, but a new world of law, where the strong are just and the weak secure and the peace preserved.

He envisioned a future world where people were not enslaved by communism and held behind an Iron Curtain or Bamboo Curtain. When he spoke in West Berlin on June 26, 1963, he addressed the importance of freedom:

Freedom is indivisible, and when one man is enslaved, all are not free. When all are free, then we can look forward to that day when this city will be joined as one and this country and this great Continent of Europe in a peaceful and hopeful globe. When that day finally comes, as it will, the people of West Berlin can take sober satisfaction in the fact that they were in the front lines for almost two decades.

All free men, wherever they may live, are citizens of Berlin, and, therefore, as a free man, I take pride in the words “Ich bin ein Berliner.”

President Kennedy saw the day when men and women on both sides of the Berlin Wall would be free.

Economic Policy

President Kennedy proposed a significant cut in taxes. Here is what he said to the Economic Club of New York on December 14, 1962:

The final and best means of strengthening demand among consumers and business is to reduce the burden on private income and the deterrents to private initiative which are imposed by our present tax system—and this administration pledged itself last summer to an across-the-board, top-to-bottom cut in personal and corporate income taxes to be enacted and become effective in 1963.

I’m not talking about a ‘quickie’ or a temporary tax cut, which would be more appropriate if a recession were imminent. Nor am I talking about giving the economy a mere shot in the arm, to ease some temporary complaint. I am talking about the accumulated evidence of the last five years that our present tax system, developed as it was, in good part, during World War II to restrain growth, exerts too heavy a drag on growth in peace time; that it siphons out of the private economy too large a share of personal and business purchasing power; that it reduces the financial incentives for personal effort, investment, and risk-taking. In short, to increase demand and lift the economy, the federal government’s most useful role is not to rush into a program of excessive increases in public expenditures, but to expand the incentives and opportunities for private expenditures.

He so believed in the need to cut taxes that he focused whole paragraphs of his 1963 State of the Union speech on the same topic. Here is one of those paragraphs:

For it is increasingly clear—to those in government, business, and labor who are responsible for our economy’s success—that our obsolete tax system exerts too heavy a drag on private purchasing power, profits, and employment. Designed to check inflation in earlier years, it now checks growth instead. It discourages extra effort and risk. It distorts the use of resources. It invites recurrent recessions, depresses our Federal revenues, and causes chronic budget deficits.

In the last few decades, many Democrat leaders have criticized President Reagan and President Bush for comparing their tax cut proposals to those of President Kennedy. But there are significant similarities. President Kennedy was not just proposing a quick fix or an economic “shot in the arm.” He saw that taxes exert “a drag on growth” in the economy. If that was true in the 1960s, when taxes on the average American were lower than they are today, it is even more true now.

Church and State

Church and state was a major issue in his campaign since he was Catholic. So he chose to speak to the issue in front of the Greater Houston Ministerial Association on September 12, 1960:

I believe in an America where the separation of church and state is absolute; where no Catholic prelate would tell the President—should he be Catholic—how to act, and no Protestant minister would tell his parishioners for whom to vote; where no church or church school is granted any public funds or political preference, and where no man is denied public office merely because his religion differs from the President who might appoint him, or the people who might elect him.

I believe in an America that is officially neither Catholic, Protestant nor Jewish; where no public official either requests or accepts instructions on public policy from the Pope, the National Council of Churches or any other ecclesiastical source; where no religious body seeks to impose its will directly or indirectly upon the general populace or the public acts of its officials, and where religious liberty is so indivisible that an act against one church is treated as an act against all.

For while this year it may be a Catholic against whom the finger of suspicion is pointed, in other years it has been—and may someday be again—a Jew, or a Quaker, or a Unitarian, or a Baptist. It was Virginia’s harassment of Baptist preachers, for example, that led to Jefferson’s statute of religious freedom. Today, I may be the victim, but tomorrow it may be you—until the whole fabric of our harmonious society is ripped apart at a time of great national peril.

We can agree with President Kennedy that religious leaders should not demand that a politician vote a certain way. But we live in a free society, so pastors should be free to express their biblical perspective on social and political issues.

That is one of the reasons Representative Walter Jones has sponsored legislation known as the “Houses of Worship Freedom of Speech Restoration Act” to make this possible. Back in 1954, then-Senator Lyndon Johnson introduced an amendment to a tax code revision that was being considered on the Senate floor. The amendment prohibited all non-profit groups—including churches—from engaging in political activity; any that did would lose their tax-exempt status. The bill by Representative Jones would return that right to churches and allow pastors and churches greater freedom to speak to these issues.

Social Issues

One issue that surfaced during Kennedy’s presidency was the subject of school prayer. In 1962, the Supreme Court issued its decision in Engel v. Vitale. This was President Kennedy’s response:

We have in this case a very easy remedy, and that is to pray ourselves. And I would think it would be a welcome reminder to every American family that we can pray a good deal more at home, we can attend our churches with a good deal more fidelity, and we can make the true meaning of prayer much more important in the lives of our children.

At the time, this may have seemed like an isolated and even necessary action by the Supreme Court. Few could have anticipated that this would be the beginning of the removal of prayer, Bible reading, and even the Ten Commandments from the classrooms of America.

So how would John F. Kennedy stand on the issue of abortion? Well, we simply don’t know, since abortion was not a major policy issue in 1963.

We do know that as a Catholic, he and the other Kennedys valued life. The Supreme Court had handed down its decision on contraception in Griswold v. Connecticut in 1965, and so during the 1968 campaign Robert F. Kennedy was asked about his views on the subject. Kennedy at that time had ten children. He used the Kennedy wit and turned the question into a funny line, replying, “You mean personally or as governmental policy?”

We do know that President Kennedy nominated Byron White to the Supreme Court. It’s worth noting that White and Justice Rehnquist cast the only two dissenting votes in Roe v. Wade.

By the way, when Justice White left the court and President Clinton nominated Ruth Bader Ginsburg, you didn’t hear anyone in the media talk about the court shifting to the left. Byron York, writing for National Review, did a Lexis-Nexis search and did not find one major media outlet that talked about this shift. By contrast, he found sixty-three instances in which the media lamented the potential shift of the court to the right with the nomination of Judge Samuel Alito.

As we have looked at some of President Kennedy’s speeches, it is amazing how much of the political dialogue has moved. But to be more precise, it is America that has moved.

It reminds you of the story of a middle-aged couple. One day as the husband was driving the car, the wife began talking about how it used to be when they first dated. They always held hands, they had long talks, and they used to sit next to each other as they drove along the countryside. Finally, she asked her husband, “Why don’t we ever sit together anymore when we drive?” He glanced over and said to her, “I’m not the one who moved.”

Reading President Kennedy’s speeches reminds us that America has moved. Maybe it’s time to get back to where we belong.

© 2006 Probe Ministries


Mind, Soul, and Neuroethics

Neuroscience is the next frontier for research, and Kerby Anderson urges Christians to pay attention to these findings and provide a biblical perspective to the research and an ethical framework for its application.

Let me begin with a question. Imagine that our medical technology has advanced enough that we can transplant a human brain. If we exchanged your brain with that of another person, would you wake up in your body with someone else’s thoughts and memories? Or would you wake up in the other person’s body?

Or consider the following questions concerning brain research:

• Scientists are beginning to work on a “smart pill” that would increase your memory and intelligence. If such a pill existed, who should take it?

• Scientists are working to develop brain fingerprinting to reveal a person’s knowledge of events. If perfected, should these brain scans be used like polygraph tests to detect if people are lying?

• Pharmaceutical companies are working to develop chemicals that block the formation of memories. If perfected, should these pills also be used to erase memories that people don’t want to have?

• Areas of the brain can be stimulated or suppressed by placing a device over the scalp. Should doctors use these devices to control your brain?

These are just a few of the questions being raised in a relatively new ethical field of discussion known as neuroethics.

In the past few years, neuroscience has been making discoveries about the human brain at an incredible rate. Advances in neuroscience and imaging methods have made it possible to observe the brain more directly. And advances in neurosurgery have also made it possible to intervene more precisely and effectively.

This new arena of neuroethics is beginning to deal with the hard questions about our rapidly growing knowledge of the human brain and our ethical and social responsibilities concerning this new information. Doctors, scientists, lawyers, politicians, and theologians are all interested in neuroethics. But as you can see from the above examples, the implications of these concerns should extend to all of us since we will ultimately be affected by the moral and legal decisions concerning neuroscience.

In developing a Christian perspective on neuroethics, we should begin with a proper understanding of the mind and brain. Nearly all scientific investigation begins with the a priori assumption that we are material, not spiritual. Thus, scientists assume there is only a brain and not an immaterial mind. Put another way, they assume there is only a body and not a soul.

Dualism

Are we merely a brain or are we both brain and mind? This is a fundamental question in science, philosophy, and theology. New advances in science seem to be challenging the notion that we are both mind and brain.

Most Christians are Cartesian dualists in that they believe that the soul inhabits the body. The name Cartesian dualism comes from the philosopher René Descartes, who four hundred years ago argued that mind and body are distinct. He is famous for the phrase, “I think, therefore I am.” In other words, the very fact that he could think about himself showed that there was something distinct from his body. He was doing something with his brain, but he was also distinct from his brain because he was the one having the thoughts.

A quarter century ago, Probe Ministries published a book that showed that we are both mind and brain. The book, The Mysterious Matter of Mind by Dr. Arthur C. Custance, presented experimental evidence that led scientists to conclude that the mind is more than matter and more than a mere by-product of the brain.{1}

One of the most famous findings in this field involved the research of Wilder Penfield. Although he was born in the U.S., he did most of his research in Canada and was later celebrated as “the greatest living Canadian.”

In 1961, Penfield reported a dramatic demonstration of the existence of a mind that is separate from the brain. He found that the mind acted independently of the brain under controlled experimental conditions. His subject was an epileptic patient who had part of the brain exposed. When Penfield used an electrode to stimulate a portion of the cortex, here is what he reported:

When the neurosurgeon applies an electrode to the motor area of the patient’s cerebral cortex causing the opposite hand to move, and when he asks the patient why he moved the hand, the response is: “I didn’t do it. You made me do it.” . . . It may be said that the patient thinks of himself as having an existence separate from his body.

Once when I warned a patient of my intention to stimulate the motor area of the cortex, and challenged him to keep his hand from moving when the electrode was applied, he seized it with the other hand and struggled to hold still. Thus, one hand, under the control of the right hemisphere driven by the electrode, and the other hand, which he controlled through the left hemisphere, were caused to struggle against each other. Behind the “brain action” of one hemisphere was the patient’s mind. Behind the action of the other hemisphere was the electrode.{2}

This experiment (and others like it) demonstrates that there is both a mind and a brain. The mind is more than merely a by-product of the brain.

Neuroscience: Opportunities and Challenges

Neuroscience has been making discoveries about the human brain at an incredible rate, and this provides both new opportunities and major ethical challenges. For example, existing brain imaging methods provide scientists with some very powerful tools to discover the structure and function of the human brain. These tools can detect various brain abnormalities. They can also help in the diagnosis of various neurological disorders.

Scientists have also been using these brain imaging machines to study emotions, language, and even our perceptions. It is possible that eventually these machines could even be used to read our thoughts and memories.

Scientists who have developed a brain fingerprinting machine believe they will be able to determine a person’s knowledge of events. By measuring electrical activity within the brain, they can see the response of a person to certain stimuli (words, sounds, pictures). Analysis of these responses might be helpful in various investigations.

Sometimes crime investigators use a polygraph machine to detect lies. But these devices are not completely foolproof. Scientists believe they might be able someday to develop accurate readings from functional magnetic resonance imaging (fMRI) to determine whether a person is telling the truth.

What are the implications of this? Is it possible that one day people who are suspected of a crime will be required to submit to a brain scan? Could brain scans be used to determine high-risk employees, potential criminals, even terrorists? For now, this is mere speculation, but neuroscience may force us to deal with these questions in the future.

Some have even speculated that measurements from these machines could help in distinguishing true memories from false memories. In some experiments, certain areas of the brain appear to respond differently to true memories and false memories.

Could brain scans be used to predict certain neurological disorders? Scientists using fMRI have found that people with schizophrenia have different sizes of key brain structures (e.g., larger lateral ventricles, reduced hippocampus, etc.) than those people without this mental disorder. Many of the ethical questions already surrounding the use of genetic screening would no doubt surface with the application of brain scans that would screen for neurological disorders.

A related question in this growing field of neuroethics is the use of mood-altering drugs. Psychopharmacology has already provided pills to treat depression, anxiety, and even attention deficit disorder. Future development in this area will no doubt yield other mood-altering and brain-altering drugs.

In the future, it might be possible to genetically engineer drugs or even genetically engineer human beings to treat and even cure mental disorders. This same technology might also allow scientists to increase memory and perhaps even increase intelligence. For now, the idea of a smart pill is just science fiction. But what if we develop such a medicine? Who should get the pill? Under what conditions would it be administered? These are all questions for the twenty-first century in this growing field of neuroethics.

Erasing Memories

In the film Eternal Sunshine of the Spotless Mind, a couple (played by Jim Carrey and Kate Winslet) undergo a brain procedure that allows them to erase each other from their memories because their relationship has turned sour. The story develops when Joel discovers that his girlfriend, Clementine, has undergone a psychiatrist’s experimental procedure which removes him from her mind. Joel then decides to undergo the same procedure. In the process, however, he rekindles his love for her.

Although the film is science fiction and essentially a thought experiment, erasing memories is something scientists are pursuing right now. They are already testing a pill that, when given after a traumatic event, seems to make the resulting memories less intense. The pill appears to blunt memory formation and could be very useful as a treatment. For example, this pill could be used if a person experiences a horrible event (such as a rape or witnessing a murder). It would also be helpful to those who have endured an earthquake, hurricane, or tsunami.

Doctors also believe that it would help victims of post-traumatic stress disorder (PTSD). The problem was first widely recognized during the Vietnam War, and the disorder has since been diagnosed in men and women serving in Iraq and Afghanistan. Those affected often experience mental symptoms (flashbacks) as well as physical symptoms.

When a traumatic event occurs, the brain is flooded with stress hormones (such as adrenalin) that actually store these memories in different ways than the manner in which memories are normally preserved. These memories seem to be stored in our brain’s hard drive, and therefore seem nearly impossible to erase.

The new pills are a class of drugs known as beta blockers, which can cross the blood-brain barrier. They can actually dull the impact of memory formation by reaching the place where stress hormones work to form these traumatic memories. Scientists believe that these drugs can not only blunt the impact of these memories but might even prevent PTSD. Some physicians believe it might be possible to cure PTSD by triggering these memories and then administering this new drug to eliminate them.

Not everyone is excited about the prospects of erasing memories. Already we have a variety of drugs that can alter a person’s personality. Antidepressants and tranquilizers are used by millions of people every day. Antipsychotic drugs are used to treat people with such mental disorders as schizophrenia. Erasing a person’s memory with certain drugs would certainly change their personality. Would that change always be for the better?

When researchers working in the area of erasing memories were asked to testify before the President’s Council on Bioethics, there was deep concern. Chairman Leon Kass argued that painful memories serve a purpose and are part of the human experience.

Biblical Perspective

Advances in the field of neuroscience certainly raise new ethical dilemmas for the twenty-first century. But they also challenge the biblical understanding of human nature. Neuroscience is beginning to explain a great deal of human behavior by mapping the human brain. Scientists are locating regions that influence personality, character, and even spirituality. Does this challenge the concept of Cartesian dualism? Can we explain mind as merely a by-product of brain?

One researcher in this field thinks the research does challenge this biblical foundation. She says you “can still believe in what Arthur Koestler called ‘the ghost in the machine’.” But she concludes that “as neuroscience begins to reveal the mechanisms of personality, character, and even sense of spirituality, this Cartesian line of interpretation becomes strained. If these are all features of the machine, why have a ghost at all? By raising questions like this, it seems likely that neuroscience will pose a far more fundamental challenge to religion than evolutionary biology.”{3}

So if you think evolution has been a challenge to Christianity, just wait until the findings of neuroscience reach the society at large. There are large and significant issues that need to be addressed. So what is a Christian perspective on these issues of mind/brain and body/soul?

First, the Bible teaches that when the soul leaves the body, the body is dead (James 2:26). And if the soul returns to the body, the whole person comes back to life (Luke 8:55). This dual nature of the body and soul is documented in many passages of Scripture (Matt. 26:41; Rom. 8:10; 1 Cor. 5:5; 6:17, 20; 7:34; 2 Cor. 7:1; Gal. 5:17).

Second, the New Testament also talks about the resurrection of the body, and Paul elaborates on the nature of this body (1 Cor. 15:35-44). We have the most complete picture of this resurrection body by observing what the Bible tells us about Jesus Christ after His resurrection. Paul tells us this is the body we will have (Phil. 3:20-21).

This resurrection body of Jesus Christ was able to freely pass through physical barriers (walls, locked doors). But it could also be examined for purposes of identification. It is a body that is able to communicate with the physical world (can be seen, heard, felt). Likewise, we can anticipate that our bodies will be able to share a meal and then disappear only to reappear in another location. It will also be a body that can act upon the physical world by moving objects, going for a walk, even starting a fire.

The Bible teaches that we are more than matter. We are both body and soul, mind and brain. Neuroscience is the next frontier for research, and Christians must pay attention to these findings and provide a biblical perspective to the research and an ethical framework for its application.

Notes

1. Arthur C. Custance, The Mysterious Matter of Mind (Grand Rapids: Zondervan/Probe, 1980).

2. Wilder Penfield, in the “Control of the Mind” Symposium, held at the University of California Medical Center, San Francisco, 1961, quoted in Arthur Koestler, The Ghost in the Machine (London: Hutchinson, 1967), 203-4.

3. Martha J. Farah, “Neuroethics,” Op-Ed, American Medical Association, www.ama-assn.org/ama/pub/category/12727.html.

© 2006 Probe Ministries


Intelligent Design and the Bible

Jan. 16, 2006

Psalm 19 tells us that the heavens declare the glory of God. Romans 1 reminds us that the creation shows His divine attributes. So we shouldn’t be surprised that scientists are finding evidence of design in nature.

The subject of intelligent design is in the news due to school board decisions and court rulings. So it is important that Christians be thinking clearly about this important topic.

When I have an opportunity to speak on the subject of intelligent design, I find that most Christians don’t exactly know what to make of this research. On the one hand, they appreciate that scientists working in such diverse fields as astronomy and biology are finding evidence of design. Whether you look in the telescope at the far dimensions of space or in a microscope at the smallest details of life, God’s fingerprint can be found.

But I also find that Christians are ambivalent about the idea of intelligent design. If you go to the websites of many creationist groups, you will find them to be critical of intelligent design research because it doesn’t identify a creator. They want the scientists to connect the dots of their research to the God of the Bible. I would like to suggest another way of looking at this issue.

Those of us who defend the historical reliability of the Bible often use the good work done by archaeologists. These archaeologists uncover historical evidence that gives us a better picture of the ancient near east. We then take their research and show how it fits with the biblical description of history. Although some archaeologists are Christians, many are not. But that doesn’t keep us from using their research to show the truthfulness of the Bible.

We can think of scientists working on intelligent design in the same way. They are pursuing a line of research that shows design in nature. We can then take their research and show how it fits with the biblical description of creation. Although many of the scientists working on intelligent design are Christians, some are not. That shouldn’t keep us from using their research. We can take their research and connect the dots.

In their book The Privileged Planet, Guillermo Gonzalez and Jay Richards show that the earth is positioned in the best place in our galaxy for complex life to exist. They also show that the earth is positioned in the best place for scientific discovery. Christian theologians and apologists can take this research and point to the fact that God created the heavens and the earth and that they show His divine care.

Michael Behe in his book Darwin’s Black Box shows that there are numerous molecular motors within the cell that are intricately assembled. He demonstrates that they have irreducible complexity. Christian theologians and apologists can take this research and show that there is evidence of design. Design implies a designer, and the Bible tells us that God is the designer of life.

Scientists working on the subject of intelligent design may not be willing to identify the Creator. But that shouldn’t keep us from using their research to connect the dots and lead people to the Creator.

© 2006 Probe Ministries International


American Indians in American History

Colonial America

Two dark chapters in American history are slavery and the treatment of the American Indian. We have an article on slavery, and in this article we will focus briefly on the story of the American Indians (or Native Americans).

It is difficult to estimate the number of Indians in the Western Hemisphere. In Central and South America, there were advanced civilizations like the Aztecs in Mexico and the Incas in Peru. So it is estimated there was a population of about twenty million before the Europeans came. By contrast, the Indian tribes north of what is now the Mexican border were “still at the hunter-gatherer stage in many cases, and engaged in perpetual warfare” and numbered perhaps one million.{1}

One of the best-known stories from colonial America is the story of John Smith and Pocahontas. John Smith was the third leader of Jamestown. He traded with the Indians and learned their language. He also learned how they hunted and fished.

On one occasion, Smith was captured by the Indians and brought before Chief Powhatan. As the story goes, a young princess by the name of Pocahontas laid her head across Smith’s chest and pleaded with her father to spare his life. This may have been an act of courage or part of an Indian ceremony. In either case, Smith was made an honorary chief of the tribe.

Although the Disney cartoon about Pocahontas ends at this point, it is worth noting that she later met an English settler and traveled to England. There she adopted English clothing, became a Christian, and was baptized.

Another famous story involves Squanto. He was originally kidnapped in 1605 and taken to England where he learned English and was eventually able to return to New England. When he found his tribe had been wiped out by a plague, he lived with a neighboring tribe. Squanto then learned that the Pilgrims were at Plymouth, so he came to them and showed them how to plant corn and fertilize with fish. He later converted to Christianity. William Bradford said that Squanto “was a special instrument sent of God for their good beyond their expectation.”{2}

These stories are typical of some of the initial interactions between the Indians and the colonists. Relations between the two were usually peaceful, but as we will see, the peace was a fragile one.

Many of the settlers owed their lives to the Indians and learned many important skills involving hunting, trapping, fishing, and farming. Roger Williams purchased land from the Indians to start Providence, Rhode Island, and William Penn bought land from the Indians who lived in present-day Pennsylvania. Others, however, merely took the land and began what became the dark chapter of exploitation of the American Indians.

Indian Wars in New England

Let’s take a look at the history of Indians in New England.

One of the leaders in New England was Roger Williams. He believed that it was right and proper to bring Christianity to the Indians. Unfortunately, “few New Englanders took trouble to instruct Indians in Christianity. What they all wanted to do was to dispossess them of their land and traditional hunting preserves.”{3}

Williams thought this was unchristian and argued that title to all Indian lands should be negotiated at a fair price. He felt anything less was sinful.{4}

Because of this, his Rhode Island colony gained the reputation of being a place where Indians were honored and protected. That colony managed to avoid any conflict with the Indians until King Philip’s War.

King Philip’s War was perhaps the most devastating war between the colonists and the Indians living in the New England area. There had been peace until that time between the Pilgrims and the Wampanoag tribe due to their peace treaty signed in the 1620s.

The war was named for King Philip who was the son of Chief Massasoit. His Indian name was Metacom, but he was called King Philip by the English because he adopted European dress and customs. In 1671, he was questioned by the colonists and fined. They also demanded that the Wampanoag surrender their arms.

In 1675, a Christian Indian who had been working as an informer to the colonists was murdered (probably by King Philip’s order). Three Indians were tried for murder and executed. In retaliation, King Philip led his men against the settlers. At one point they came within twenty miles of Boston itself. If he could have organized a coalition of Indian tribes, he might have extinguished the entire colony.

Throughout the summer and fall of 1675, Philip and his followers destroyed farms and townships over a large area. The Massachusetts governor dispatched troops against the Indians, and the conflict largely ended in 1676 when Philip was killed in battle.

The war was costly to the colonists in terms of lives and finances. It also resulted in the near extermination of many of the tribes in southern New England.

The Pequot War in the 1630s developed initially because of conflict between Indian tribes. It began with a dispute between the Pequots and the Mohicans in the Connecticut River area over valuable shoreline where shells and beads were collected for wampum.

Neither the English nor the nearby Dutch came to the aid of the Mohicans. Thus, the Pequots became bold and murdered a number of settlers. In response, the Massachusetts governor sent armed vessels to destroy two Indian villages. The Pequots retaliated by attacking Wethersfield, Connecticut, killing nine people and abducting two others.{5}

The combined forces of the Massachusetts and Connecticut militia set out to destroy the Pequot. They surrounded the main Pequot fort in 1637 and slaughtered five hundred Indians (men, women, and children). The village was set on fire, and most who tried to escape were shot or clubbed to death.{6}

Post Revolutionary America

Tecumseh was a Shawnee chief who lived in the Ohio River Valley and benefited from British support. During the War of 1812, the British had a policy of organizing and arming minorities against the United States. Not only did they liberate black slaves, but they armed and trained many of the Indian tribes.{7}

As thousands of settlers moved into this area, the Indians were divided over whether to accommodate the newcomers or to attack American settlements. Tecumseh chose resistance. He refused to sign any treaties with the government and organized an Indian resistance movement against the settlers.

Together with his brother Tenskwatawa, who was also known as “the Prophet,” he called for a war against the white man: “Let the white race perish! They seize your land. They corrupt your women. They trample on the bones of your dead . . . . Burn their dwellings—destroy their stock—slay their wives and children that their very breed may perish! War now! War always! War on the living! War on the dead!”{8}

Tecumseh and “the Prophet” met with other Indian tribes in order to unite them into a powerful Indian confederacy. This confederacy began to concern government authorities especially when the militant Creeks (known as the Red Sticks because they carried bright red war clubs) joined and began to massacre the settlers.

General William Henry Harrison was at that time the governor of the Indiana Territory (he later became president). While Tecumseh was recruiting more Indian tribes, Harrison’s army defeated fighters led by “the Prophet” at the Tippecanoe River. This victory was later used in his presidential campaign (“Tippecanoe and Tyler too”).

American settlers as well as some Indian tribes attempted to massacre the Creeks in the south. When this attempt failed, the settlers retreated to Fort Mims. The Creeks took the fort, murdered over five hundred men, women, and children, and carried away two hundred fifty scalps on poles.{9}

At this point, Major-General Andrew Jackson was told to take his troops south and avenge the disaster. Those who joined him included David Crockett and Samuel Houston. Two months after the massacre, Jackson surrounded an Indian village and sent in his men to destroy it. David Crockett said: “We shot them like dogs.”{10}

A week later, Jackson won a pitched battle at Talladega, attacking a thousand Creeks and killing three hundred of them. He then moved against the Creeks at Horseshoe Bend. When the Indians would not surrender, they were slain. Over five hundred were killed within the fort and another three hundred drowned trying to escape in the river. Shortly after this decisive battle, the remaining Creeks surrendered.

Trail of Tears

The Cherokee called Georgia home, and they were an advanced Indian civilization. Their national council dated back to 1792, and they had maintained a written legal code since 1808. They had a representative form of government (with eight congressional districts). But the settlers moving into the state continued to take their land.

When Andrew Jackson was elected president in 1828, it sealed the fate of the Indians. “In his inaugural address he insisted that the integrity of the state of Georgia, and the Constitution of the United States, came before Indian interests, however meritorious.”{11}

In 1830, Congress passed the “Indian Removal Act.” This act forced Indians who were organized tribally and living east of the Mississippi River to move west to Indian Territory. It also authorized the president to use force if necessary. Many Americans were against the act, including Tennessee Congressman Davy Crockett. It passed anyway and was quickly signed by President Jackson.

The Indian tribes most affected by the act were the so-called “civilized tribes” that had adopted many of the ways of the white settlers (Choctaw, Chickasaw, Creek, Seminole, and Cherokee). The Cherokees had actually formed an independent Cherokee Nation.

Cherokee leader John Ross went to Washington to ask the Supreme Court to rule in favor of his people and allow them to keep their land. In 1832, Chief Justice John Marshall and the U.S. Supreme Court ruled that the Cherokee Nation was not subject to the laws of the state of Georgia and therefore had a right to their land. The Cherokee would have to agree to removal in a treaty (which would also have to be ratified by the Senate).

A treaty with one of the Cherokee leaders gave Jackson the legal document he needed to remove the Indians. The U.S. Senate ratified the treaty by one vote over the objections of such leaders as Daniel Webster and Henry Clay.

In one of the saddest chapters in American history, the Indians were taken from their land, herded into makeshift forts, and forced to march a thousand miles. Often there was not enough food or shelter. Four thousand Cherokees died on the march to Oklahoma. This forced removal has been called “the Trail of Tears.”

The Seminole resisted this forced march. Their leader Osceola fought the U.S. Army in the swamps of Florida with great success. However, when the Seminoles raised the white flag in truce, the U.S. Army seized Osceola. He died in prison a year later.

Those who made it to Oklahoma did not fare much better. Although Oklahoma was Indian Territory, settlers began to show interest in the land. So the government began to push Indians onto smaller and smaller reservations. The final blow came with the Homestead Act of 1862 which gave one hundred sixty acres to anyone who paid a ten-dollar filing fee and agreed to improve the land for five years.

Indian Wars in the West

Until the 1860s, the Plains Indians were not significantly affected by the white man. But the advance of the settlers and the transcontinental railroad had a devastating impact on their way of life. The railroads cut the Great Plains in half so that the west was no longer the place where the buffalo roam. Prospectors ventured onto Indian lands seeking valuable minerals. So it was inevitable that war would break out. Between 1869 and 1878, over two hundred pitched battles took place, primarily with the Sioux, Apache, Comanche, and Cheyenne.

The impact of an endless stream of settlers had the effect of forcing the Plains Indians onto smaller and smaller reservations. Even though the government signed various treaties with the Indians, they were almost always broken. Approximately three hundred seventy treaties were signed from 1778 to 1871 while an estimated eighty or ninety agreements were also entered into between 1871 and 1906.{12}

One of the most famous Indian battles was “Custer’s Last Stand.” Sioux and Cheyenne warriors, led by Crazy Horse and Sitting Bull, fought against Lieutenant Colonel George Armstrong Custer. The Battle of Little Big Horn actually wasn’t much of a battle. Custer was ordered to observe a large Sioux camp. But he decided to attack even though he was warned his men might be greatly outnumbered. It turned out they were outnumbered ten to one. Within an hour, Custer and all his men were dead.

Custer’s defeat angered many Americans, so the government fought even more aggressively against the Indians. Many historians believe that the anger generated by “Custer’s Last Stand” led to the slaughter of Sioux men, women, and children at Wounded Knee in 1890. After the death of Sitting Bull, a band of Sioux fled into the badlands, where they were captured by the 7th Cavalry. The Sioux were ordered disarmed, but an Indian fired a gun and wounded an officer. The U.S. troops opened fire, and within minutes almost two hundred men, women, and children were killed.

The Apache leader Geronimo led many successful attacks against the army. By 1877, the Apache had been forced onto reservations. But on two separate occasions, Geronimo planned escapes and led resistance efforts from mountain camps in Mexico. He finally surrendered in 1886.

Since the first expedition by Lewis and Clark, the Nez Percé of the Northwest had built friendships with trappers and traders. Chief Joseph refused to sign treaties with the government that would give up their homeland. Eventually fighting broke out, so Chief Joseph led his people toward Canada. Unfortunately, they were surrounded by soldiers just forty miles from the border. Chief Joseph died at a reservation in Washington State in 1904.

This is the sad and tragic story of the American Indian in American history. We cannot change our history, and we should not rewrite our history. Neither should we ignore the history of the American Indian in the United States.

Notes

1. Paul Johnson, A History of the American People (New York: HarperCollins Publishers, 1997), 7.
2. William Bradford, History of Plymouth Plantation, c. 1650.
3. Johnson, 47.
5. Johnson, 76.
6. Alden T. Vaughn, The New England Frontier: Puritans and Indians, 1620-1675 (Boston: Little Brown & Company, 1965).
7. Reginald Horsman, “British Indian Policy in the North-West 1807-1812,” Mississippi Valley Historical Review, April 1958.
8. J. F. H. Claiborne, Mississippi, as a Province, Territory and State, 3, quoted in Robert V. Remini, Andrew Jackson and the Course of American Freedom, 1822-1832 (New York: Harper and Row, 1981), i.
9. H. S. Halbert and T. S. Hall, The Creek War of 1813-14 (Tuscaloosa, 1969), 151ff.
10. David Crockett, A Narrative of the Life of David Crockett of the State of Tennessee, 1834.
11. Johnson, 350.
12. Charles M. Harvey, “The Red Man’s Last Roll-Call,” Atlantic Monthly 97 (1906), 323-330.

© 2006 Probe Ministries


Myths About Intelligent Design

January 1, 2006

In December, a decision by U.S. District Judge John Jones in Dover, Pennsylvania, once again put the topic of intelligent design in the news. He ruled that the school board’s actions were unconstitutional and merely an attempt to smuggle religious views into a science classroom.

Media coverage of the Dover case and the broader topic of intelligent design has often been inadequate. When I have spoken on this subject, I have found that many Christians don’t have an accurate perspective on it. So let me take a moment to address some of the myths surrounding this scientific theory.

First, proponents of intelligent design are not trying to smuggle religion into the classroom. While that may have been the intent of some of the Dover school board members, it is clear that is not the desire of scientists working on intelligent design. The Discovery Institute is one of the leading think tanks in the area of intelligent design, and it actually opposes requiring that intelligent design be taught in the classroom. It is pursuing intelligent design as a scientific theory, not as a public school curriculum.

It might be worth noting that what Judge Jones struck down was a requirement that a short statement be read in class that mentioned the phrase “intelligent design” twice. It also allowed students to look at a supplemental text on intelligent design titled Of Pandas and People. The students would be instructed from the standard biology textbook published by Prentice Hall, but would be allowed to also read from the supplemental text if they desired.

Second, intelligent design is not just the latest modified attempt to introduce creationism into the classroom. Judge Jones and the media make it seem as if the people who promoted scientific creationism in the 1970s and 1980s are the same people pushing intelligent design now. That is not the case. None of the leaders of the intelligent design movement have been involved with creationist groups like the Institute for Creation Research, Answers in Genesis, or Reasons to Believe. In fact, if you go to the websites of many creation groups, you will find they are often critical of intelligent design because it does not specifically identify a creator.

Third, intelligent design is much more than a refutation of evolution. It provides a positive model that can be tested. Judge Jones argued that “the fact that a scientific theory cannot yet render an explanation on every point should not be used as a pretext to thrust an untestable alternative hypothesis grounded in religion into a science classroom.”

Scientists pursuing intelligent design are doing much more than just criticizing evolution. They are proposing new ideas that can be tested. For example, Michael Behe (author of the book Darwin’s Black Box) suggests that molecular motors within the cell exhibit what he calls irreducible complexity. He shows that the bacterial flagellum requires numerous parts to all be present simultaneously for it to function. It is a testable model that other scientists can verify or refute using scientific data.

The ruling by Judge Jones won’t end the debate about intelligent design. But at least when we debate its merits or flaws, we should get our facts straight.

© 2005 Probe Ministries International


Stem Cell Wars

December 17, 2005

The political war over stem cell research is heating up, as evidenced by two recent events in the media. For the last few weeks, Senate Democrats have blocked action on a bill that would allow the use of umbilical cord blood in stem cell research. Although the bill passed the House by a remarkable vote of 431-1, the Democratic leadership in the Senate would not allow a vote on the measure. The bill was even endorsed by the Congressional Black Caucus due to the positive appeal from former basketball star Julius (Dr. J.) Erving.

Also in the news was the decision by University of Pittsburgh’s Gerald Schatten to quit the human cloning project of South Korean scientist Dr. Hwang Woo Suk. Dr. Schatten cited ethical concerns about possible coercion in obtaining eggs from female project staffers. Dr. Schatten also demanded that his name be removed from an article he co-wrote with Dr. Hwang for the journal Science because he believes it used fraudulent photographs in the article.

Background

Stem cells are the basic cells in our body. They get their name from their similarity to the stem of a plant which gives rise to branches, bark, and every other part of a plant. Embryonic stem cells are the cells from which all 210 different kinds of tissue in the human body originate. As an embryo develops into a blastocyst, a few layers of cells surround a mass of stem cells. If these stem cells are removed from the blastocyst, they cannot develop as an embryo but can be cultured and grown into these different tissues.

Stem cells are undifferentiated and self-replicating cells that have the potential to become the other differentiated cells in our body. And that is why there is so much scientific and political attention being paid to stem cells.

The potential for stem cell research is enormous and intoxicating. Nearly 100 million Americans have serious diseases that eventually may be treated or even cured by stem cell research. Many diseases (like Parkinson’s, heart disease, diabetes) result from the death or dysfunction of a single cell type. Scientists hope that the introduction of healthy cells of this type will restore lost or compromised function.

Moral Perspective

The moral problem with the research is that to obtain human embryonic stem cells, the embryo is destroyed. Embryos needed for human embryonic stem cell research can be obtained from three sources: (1) embryos created through in-vitro fertilization specifically for research, (2) spare frozen embryos left over from in-vitro fertilization, or (3) embryos produced by human cloning.

In addition to the moral problem is the scientific reality that embryonic stem cell research has not been successful. Although human embryonic stem cells have the potential to become any type of human cell, no one has yet mastered the ability to direct these embryonic cells in a way that can provide possible therapy for humans afflicted with various diseases.

Numerous stories are surfacing of the problems with human embryonic stem cells. One example took place in China where scientists implanted human embryonic stem cells into a patient suffering from Parkinson’s only to have them transform into a powerful tumor that eventually killed him.

Often the media has not been telling the truth about embryonic stem cell research. So why hasn’t the media accurately covered this issue? “To start with, people need a fairy tale,” said Ronald D.G. McKay, a stem cell researcher at the National Institute of Neurological Disorders and Stroke. “Maybe that’s unfair, but they need a story line that’s relatively simple to understand.”

What has been lost in all of this discussion is the humanity of the unborn. Proponents of embryonic stem cell research argue that an embryo or fetus is a “potential” human life. Yet at every stage in human development (embryo, fetus, child, adult), we retain our identity as human beings. We are humans from the moment of conception. We do not have the right to dismember a human embryo because it’s unwanted or located in a test tube in a fertility clinic.

Also lost in this discussion is the success of using stem cells from sources other than embryos. Successful clinical trials have shown that adult stem cells as well as umbilical cord blood have been very effective. These sources may provide cures for such diseases as multiple sclerosis, rheumatoid arthritis, and systemic lupus. Some studies seem to indicate that adult stem cells create “fewer biological problems” than embryonic ones.

No moral concerns surround the use of human adult stem cells since they can be obtained from the individual requiring therapy. And using blood from umbilical cords of newborns does not raise any significant concerns because the newborn is not harmed in any way.

In the last few years, stem cells have also been found in tissues previously thought to be devoid of them (e.g., neural tissue, nasal passages). And human adult stem cells are also more malleable than previously thought. For example, bone marrow stem cells can produce skeletal muscle, neural, cardiac muscle, and liver cells. Bone marrow cells can even migrate to these tissues via the circulatory system in response to tissue damage and begin producing cells of the appropriate tissue type.

Human adult stem cell research is already effective and raises none of the moral questions of human embryonic stem cell research. Even biotech industry proponents of embryonic stem cell research believe that we may be twenty years away from developing commercially available treatments using embryonic stem cells.

All of this, however, seems lost on some in Congress who continue to push for additional funding of embryonic stem cell research. When Democratic leaders in the Senate hold up a cord blood bill that will help people just to get a vote on an embryonic stem cell bill, they clearly have the wrong priorities. Adult stem cell research is already effective. Embryonic stem cell research is not.

© 2005 Probe Ministries International



Is the World Flat? How Should Christians Respond in Today’s Global World

Drawing from Thomas Friedman’s book The World is Flat, Kerby Anderson looks at some of the major new factors in our world that cause not only countries and companies but also individuals to think and act globally. Most of the factors discussed are givens; Kerby helps us consider their impact on Christianity and the spread of the gospel on a global basis.

Introduction

Is the world flat? The question is not as crazy as it might sound in light of the book by Thomas Friedman entitled The World is Flat: A Brief History of the Twenty-First Century. His contention is that the global playing field has been leveled or flattened by new technologies.

In fourteen hundred and ninety-two, when Columbus sailed the ocean blue, he used rudimentary navigational equipment to prove that the earth was round. More than five hundred years later, Friedman discovered in a conversation with one of the smartest engineers in India that, essentially, the world was flat. Friedman argues that we have entered a third era of globalization, which he calls Globalization 3.0, and that it has flattened the world.

The first era of globalization (which he calls Globalization 1.0) lasted from when Columbus set sail until around 1800. “It shrank the world from a size large to a size medium. Globalization 1.0 was about countries and muscles.”{1} The key change agent in this era was how much muscle your country had (horsepower, wind power, etc.). Driven by such factors as imperialism and even religion, countries broke down walls and began the process of global integration.

The second era (which he calls Globalization 2.0) lasted from 1800 to 2000, with interruptions during the Great Depression and World Wars I and II. “This era shrank the world from size medium to a size small. In Globalization 2.0, the key agent of change, the dynamic force driving global integration, was multinational companies.”{2} At first these were Dutch and English joint-stock companies; later came the growth of a global economy driven by computers, satellites, and even the Internet.

The dynamic force in Globalization 1.0 was countries globalizing, while the dynamic force in Globalization 2.0 was companies globalizing. Friedman contends that Globalization 3.0 will be different because it provides “the newfound power for individuals to collaborate and compete globally.”{3}

The players in this new world of commerce will also be different. “Globalization 1.0 and 2.0 were driven primarily by European and American individuals and businesses. . . . Because it is flattening and shrinking the world, Globalization 3.0 is going to be more and more driven not only by individuals but also by a much more diverse—non-Western, non-white—group of individuals. Individuals from every corner of the flat world are being empowered.”{4}

The Flatteners

Friedman argues in his book that the global playing field has been flattened by new technologies.

The first flattener occurred on November 9, 1989. “The fall of the Berlin Wall on 11/9/89 unleashed forces that ultimately liberated all the captive peoples of the Soviet Empire. But it actually did so much more. It tipped the balance of power across the world toward those advocating democratic, consensual, free-market-oriented governance, and away from those advocating authoritarian rule with centrally planned economies.”{5}

The economic change was even more important. The fall of the Berlin Wall encouraged the free movement of ideas, goods, and services. “When an economic or technological standard emerged and proved itself on the world stage, it was much more quickly adopted after the wall was out of the way.”{6}

Thomas Friedman also makes a connection between the two dates 11/9 and 9/11. He noted that in “a world away, in Muslim lands, many thought [Osama] bin Laden and his comrades brought down the Soviet Empire and the wall with religious zeal, and millions of them were inspired to upload the past. In short, while we were celebrating 11/9, the seeds of another memorable date—9/11—were being sown.”{7}

A second flattener was Netscape. This new software played a huge role in flattening the world by making the Internet truly interoperable. Until then, there were disconnected islands of information.

We used to go to the post office to send mail; now most of us send digitized mail over the Internet known as e-mail. We used to go to bookstores to browse and buy books; now we browse and buy digitally. We used to buy a CD to listen to music; now many of us obtain our digitized music off the Internet and download it to an MP3 player.

A third flattener was work flow software. As the Internet developed, people wanted to do more than browse books and send e-mail. “They wanted to shape things, design things, create things, sell things, buy things, keep track of inventories, do somebody else’s taxes, and read somebody else’s X-rays from half a world away. And they wanted to be able to do any of these things from anywhere to anywhere and from any computer to any computer—seamlessly.”{8}

All the computers needed to be interoperable not only between departments within a company but between the systems of any other company. Work flow software made this possible.

Where will this lead? Consider this likely scenario. When you want to make a dentist appointment, your computer translates your voice into a digital instruction. Then it will check your calendar against the available dates on the dentist’s calendar. It will offer you three choices, and you will click on the preferred date and hour. Then a week before your appointment, the dentist’s calendar will send you an e-mail reminding you of the appointment. The night before your appointment, a computer-generated voice message will remind you.

The fourth flattener is open-sourcing. Open-sourcing comes from the idea that groups make the source code for software available online, let anyone who has something to contribute improve it, and let millions of others download it for free.

One example of open-source software is Apache, which currently powers about two-thirds of the websites in the world. Another example of open-sourcing is blogging. Bloggers are often one-person online commentators linked to others by their common commitments. They have essentially created an open-source newsroom.

News bloggers were responsible for exposing the bogus documents used by CBS and Dan Rather in a report about President Bush’s Air National Guard service. Howard Kurtz of The Washington Post wrote (Sept. 20, 2004): “It was like throwing a match on kerosene-soaked wood. The ensuing blaze ripped through the media establishment as previously obscure bloggers managed to put the network of Murrow and Cronkite on the defensive.”

Another example of open-sourcing is the Wikipedia project which has become perhaps the most popular online encyclopedia in the world. Linux is another example. It offers a family of operating systems that can be adapted to small desktop computers or laptops all the way up to large supercomputers.

A fifth flattener is outsourcing. In many ways, this was made possible when American companies laid fiber-optic cable to India. Ultimately, India became the beneficiary.

India has become very good at producing brain power, especially in the sciences, engineering, and medicine. There are only a limited number of Indian Institutes of Technology to serve a population of one billion people. The resulting competition produces a phenomenal knowledge meritocracy. Until India was connected, many of its graduates came to America. “It was as if someone installed a brain drain that filled up in New Delhi and emptied in Palo Alto.”{9}

Fiber-optic cable became the ocean crosser. You no longer need to leave India to be a professional because you can plug into the world from India.

A sixth flattener was offshoring. Offshoring is when a company takes one of its factories that is operating in Canton, Ohio and moves the whole factory to Canton, China.

When China joined the World Trade Organization, it took Beijing and the rest of the world to a new level of offshoring. Companies began to shift production offshore and integrate their products and services into their global supply chains.

The more attractive China makes itself for offshoring, the more attractive other developed and developing countries have to make themselves. This has created a process of competitive flattening and a scramble to give companies the best tax breaks and subsidies.

How does this affect the United States? “According to the U.S. Department of Commerce, nearly 90 percent of the output from U.S.-owned offshore factories is sold to foreign consumers. But this actually stimulates American exports. There is a variety of studies indicating that every dollar a company invests overseas in an offshore factory yields additional exports for its home country, because roughly one-third of global trade today is within multi-national companies.”{10}

The seventh flattener is supply chaining. “No company has been more efficient at improving its supply chain (and thereby flattening the world) than Wal-Mart; and no company epitomizes the tension the supply chains evoke between the consumer in us and the worker in us more than Wal-Mart.”{11}

Thomas Friedman calls Wal-Mart “the China of companies” because it can use its leverage to grind down any supplier to the last halfpenny. And speaking of China, if Wal-Mart were an individual economy, it would rank as China’s eighth-biggest trading partner, ahead of Russia, Australia and Canada.

An eighth flattener is what Friedman calls insourcing. A good example of this is UPS. UPS is not just delivering packages; the company is doing logistics. Its slogan is “Your World Synchronized.” The company is synchronizing global supply chains.

For example, if you own a Toshiba laptop computer under warranty that needs to be fixed, you call Toshiba. What you probably don’t know is that UPS will pick up your laptop and repair it at a UPS-run workshop dedicated to computer and printer repair. UPS fixes it and returns it in much less time than it would take to send it all the way to Toshiba.

A ninth flattener is in-forming. A good example of that is Google. Google has been the ultimate equalizer. Whether you are a university professor with a high-speed Internet connection or a poor kid in Asia with access to an Internet café, you have the same basic access to research information.

Google puts an enormous amount of information at our fingertips. Essentially, all of the information on the Internet is available to anyone, anywhere, at any time.

Friedman says that, “In-forming is the ability to build and deploy your own personal supply chain—a supply chain of information, knowledge, and entertainment. In-forming is about self-collaboration—becoming your own self-directed and self-empowered researcher, editor, and selector of entertainment, without having to go to the library or movie theater or through network television.”{12}

A tenth flattener is what he calls “the steroids.” These are all the things that speed the process (computer speed, wireless).

For example, the increased speed of computers is dazzling. The Intel 4004 microprocessor (introduced in 1971) produced 60,000 instructions per second. Today’s Intel Pentium 4 Extreme has a maximum of 10.8 billion instructions per second, an increase of roughly 180,000-fold.

The wireless revolution allows anyone portable access to everything that has been digitized anywhere in the world. When I was in graduate school at Yale University, all of us were tied to a single mainframe computer. To use the computer, I had to hand computer cards to someone in the computer lab to input data or extract information. Now, thanks to digitization, miniaturization, and wireless technology, I can do all of that and much more from my home, office, coffee shop, airport—you name it.

Biblical Perspective

Although futurists have long talked about globalization and a global village, many of these forces have made that a reality. At this point it might be valuable to distinguish between globalization and globalism. Although these terms are sometimes used interchangeably, I want to draw some important distinctions. Globalization is used to describe the changes taking place in society and the world due to economic and technological forces. Essentially, we have a global economy and live in the global village.

Globalism is the attempt to draw us together into a new world order with a one world government and one world economy. Sometimes this even involves a desire to develop a one world religion. In a previous article (“Globalism and Foreign Policy”), I addressed many of the legitimate concerns about this push towards global government. We should be concerned about political attempts to form a new world order.

On the other hand, we should also recognize that globalization is already taking place. The World is Flat focuses on many of the positive aspects of this phenomenon, even though there are many critics who believe it may be harmful.

Some believe that it will benefit the rich at the expense of the poor. Some believe it will diminish the role of nations in deference to world government. These are important issues that we will attempt to address in future articles.

For now, let’s look at some important implications of a flat world. First, we should prepare our children and grandchildren for global competition. Thomas Friedman says that when he was growing up his parents would tell him, “Finish your dinner. People in China and India are starving.” Today he tells his daughters, “Girls, finish your homework—people in China and India are starving for your jobs.”{13}

Another implication is the growing influence of the two countries with the largest populations: China and India. Major companies are looking to these countries for research and development. The twentieth century was called “the American Century.” It is likely that the twenty-first century will be “the Asian Century.”

These two countries represent one-third of the world’s population. They will no doubt transform the entire global economy and political landscape.

Students of biblical prophecy wonder if these two countries represent the “Kings of the East” (Rev. 16:12). In the past, most of the focus was only on China. Perhaps the Kings (plural) represent both China and India.

A final implication is that this flattened world has opened up ministry through the Internet and subsequent travel to these countries. Probe Ministries, for example, now has a global ministry. In the past, international contact meant the occasional letter from a foreign country. We now interact daily with people from countries around the world.

Last month the Probe website had nearly a quarter of a million visitors from over 140 countries. These online contacts open up additional opportunities for speaking and ministry overseas.

The flattening of the world may have its downsides, but it has also opened up ministry in ways that were unimaginable just a few years ago. Welcome to the flat world.

Notes

  1. Thomas Friedman, The World is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus and Giroux, 2005), 9.
  2. Ibid.
  3. Ibid., 10.
  4. Ibid., 11.
  5. Ibid., 49.
  6. Ibid., 52.
  7. Ibid., 55.
  8. Ibid., 73.
  9. Ibid., 105.
  10. Ibid., 123.
  11. Ibid., 129.
  12. Ibid., 153.
  13. Ibid., 237.

© 2005 Probe Ministries


“What’s Dominionism?”

Mr. Anderson:

I heard you say on Point of View that your guest, Craig Parshall, can speak on many issues. You were talking about that PBS person, Bill Moyers.

What’s this “dominionism” thing? I went to Wikipedia and it doesn’t sound like anything a true follower of Christ Jesus would want to be involved with.

I noticed that the May 2005 issue of Harper’s magazine that Craig Parshall was talking about on the program actually used the term dominionism. I really think the authors of that magazine article and of the Wikipedia entry are misusing the term.

Dominion theology describes a small group of postmillennial Christians who are part of the Christian Reconstruction movement. They are trying to bring about God’s kingdom on earth through government, societies, and cultures. That would not describe the theology or agenda of the members of the National Religious Broadcasters or the National Association of Evangelicals.

In fact, I can’t think of a single prominent leader in either of these organizations who would hold to that theological position. Perhaps there is one I don’t know about, but it certainly does not describe the theology of NRB or NAE.

To put it simply, I don’t think the term “dominionist” in the magazine or even in the Wikipedia entry is a fair description of the evangelical leadership in America.

Thanks for writing.

Kerby Anderson

© 2005 Probe Ministries


“I Have Some Questions on the Separation of Church and State”

Mr. Anderson,

I read your article on the Separation of Church and State and have a few questions for you. At the end of your article you wrote of an “‘open public square’ (where government neither censors nor sponsors religion but accommodates religion).” First of all, I’m curious as to whether you feel that the architects of the First Amendment intended for the protection of religion in general (as in Christianity, Judaism, Islam, Buddhism, etc.), or for the protection of strictly Christianity, as many of them were Christians, or at least claimed to be Christians? In addition to the latter part of that question, do you feel it was added more to prevent the rights, morals, etc. of Christians from being infringed on by a future non-Christian president, or do you feel it was added in order that a Christian president did not infringe on the beliefs of those of other faiths? Secondly, I am wondering as to the purpose of an “open public square” in the context of religions other than Christianity. Ideally, how would you see something like that functioning?

Thank you for your questions about the separation of church and state. Let me try to answer them in order.

1. Did the architects of the First Amendment intend to protect religion in general?

Although the primary religious faith in the 18th century was Christianity, it certainly appears that the framers intended the First Amendment to be inclusive of all religious faiths. For example, in James Madison’s Memorial and Remonstrance, he says:

Because we hold it for a fundamental and undeniable truth, that religion or the duty which we owe to our Creator and the manner of discharging it, can be directed only by reason and conviction, not by force or violence.

He seems to be defining religion as the duty we owe to our Creator. I would take that to apply to nearly any religion, not just the Christian religion.

2. Was it added to prevent the rights and morals of Christians from being infringed?

Some who ratified the Constitution did not even want a Bill of Rights, but others would not ratify the Constitution unless there were specific protections to prevent the encroachment of the newly formed federal government. The framers clearly stated that “Congress shall make no law,” meaning that the federal government can’t tell citizens what to pray, what to read, what to think, or even where to assemble. These protections apply to all citizens, not just to Christians.

3. What is the purpose of an open public square?

As I mentioned in my article, I believe this would be a world in which all religious perspectives would be given an opportunity to express themselves in the public square. Although we supposedly live in a society dedicated to tolerance and civility (see my article on this topic), religious values are often stripped from the public square. This naked public square seems to permit only secular ideas and values rather than all ideas and values.

A good example of an open public square would be the Equal Access Act passed by Congress in 1984. Religious students should have the same access to school facilities as non-religious students. If a school allows the debate club or the Spanish club to use school facilities after school, it should also allow students who want to start a Bible club to have the same privileges.

Kerby Anderson

© 2005 Probe Ministries


Video Games – Evaluating Them From a Christian Perspective

Grand Theft Auto

The best-selling video game in America last year was “Grand Theft Auto: San Andreas.” The recent controversy over this popular video game is just another reminder of the deception of ratings and the need for parental direction and discernment when it comes to buying video games.

The game in question already has a bad reputation. The National Institute on Media and the Family described it this way: “Raunchy, violent and portraying just about every deviant act that a criminal could think of in full, living 3D graphics. Grand Theft Auto takes the cake again as one of the year’s worst games for kids. The premise—restore respect to your neighborhood as you take on equally corrupt San Andreas police.”{1}

Ironically, what caused the controversy over the game was not its overt violence and sexuality. What caused a national stir was what was hidden within the game. Those playing the game (known as gamers) could download a modification of “Grand Theft Auto” that would allow them to see graphic sex scenes on screen.

Initially the distributor distanced itself from what hackers could do with its product once it was on the market. But that argument fell flat when it was found that the downloaded modification merely unlocked pornographic material already within the game. It now turns out that skilled players can unlock the pornographic content without downloading the key from the Internet. The game initially had a “Mature” rating. The Entertainment Software Rating Board now requires that it be labeled “Adults Only.”

“Grand Theft Auto” has already been a lightning rod for controversy because it rewards players for committing crimes and engaging in dangerous and immoral behavior. Gamers can buy and sell drugs, steal cars, run down pedestrians, even feed people into a wood chipper. Nevertheless, the game has sold more than five million copies in the United States.

Who is buying this game? Some are adults buying the game for themselves, but a large percentage of the people buying this game are parents or grandparents buying the game for their kids or grandkids.

Columnist Mona Charen points out that the original concerns about this game surfaced when a Manhattan grandmother bought the game for her fourteen-year-old grandson. Then she was shocked to find out that he could modify the game by downloading material from the Internet. Charen asks, “So, a kindly eighty-five-year-old lady has no qualms about purchasing a gang-glorifying, violence-soaked, sick entertainment for her teenage grandson, but is shocked when it turns out to contain explicit sex? Wasn’t the rest enough?”{2}

In most cases, parents and grandparents are buying these games and need to exercise discernment. Many games are harmless and can even help stimulate the mind. Some are questionable. And others are violent and sexually explicit. We need to use discernment in selecting these games.

Benefits of Video Games

A recent article in Discover magazine talked about the perception most people have of video game players. It said this is “the classic stereotype of gamers as attention-deficit-crazed stimulus junkies, easily distracted by flashy graphics and on-screen carnage.”{3} Yet new research shows that gaming can be mentally enriching, with such cognitive benefits as pattern recognition, systems thinking, and even patience.{4}

One of the best-known studies (done by Shawn Green and Daphne Bavelier) found that playing an action video game markedly improved performance on a range of visual skills related to detecting objects in briefly flashed displays. They found that gamers exhibit superior performance relative to non-gamers on a set of benchmark visual tasks.{5}

What they found was that action video gamers tend to be more attuned to their surroundings. This heightened awareness shows up not only within the video game; it also transfers to tasks such as driving down a residential street, where a gamer is more likely than a non-gamer to pick out a child running into the street after a ball.

They found that gamers can process visual information more quickly and can track 30 percent more objects than non-gamers. These conclusions came from testing both gamers and non-gamers with a series of three tests.

The first test flashed a small object on a screen for 1/160 of a second and the participant would indicate where it flashed. Gamers tended to notice the object far more often than non-gamers.

The second test flashed a number of small objects on a screen at once. The subjects had to type the number of objects they saw. Gamers saw the correct number more often than non-gamers.

The third test flashed black letters and one white letter on a screen in fast succession. The one white letter was sometimes followed by a black “X.” Gamers were able to pick out the white letter more often than non-gamers and could more accurately say whether it was followed by a black “X.”

The researchers also wanted to know whether the superior performance of gamers was acquired or self-selected. In other words, do video games actually improve visual attention skills or is it possible that visually attentive people choose to play video games?

Green and Bavelier trained a selection of non-gamers on one of two video games. One group played the World War II action video game “Medal of Honor.” The other group served as the control group and played the puzzle game “Tetris.” The researchers found that after two weeks, the group trained on the World War II game showed a marked increase in performance over the control group.

The researchers therefore concluded: “By forcing players to simultaneously juggle a number of varied tasks (detect new enemies, track existing enemies and avoid getting hurt, among others), action-video-game playing pushed the limits of three rather different aspects of visual attention.”{6}

Video games can also train our brain to be more efficient. In the early 1990s, Richard Haier (of the University of California at Irvine’s Department of Psychiatry and Human Behavior) scanned the brains of “Tetris” players. He found that in first-time players, the brain requires lots of energy. In fact, cerebral glucose metabolic rates actually soar. But after a few weeks, these rates sink to normal as performance increases seven-fold.{7} In essence, “Tetris” trains your brain to stop using inefficient gray matter.

Types of Video Games

Let’s now focus on the rating of video games and the major video game categories. As we mentioned earlier, the video game industry is self-regulated, so we need to exercise discernment.

EC – Early Childhood (age 3 and older) – These games are appropriate for anyone who can play a video game and contain no inappropriate material.

E – Everyone (age 6 and older) – These games are designed for younger players and are the equivalent of a PG movie.

T – Teen (age 13 and older) – Generally these games are not appropriate for younger ages and are the equivalent of a PG-13 movie.

M – Mature (age 17 and older) – These games are not appropriate for children. They may be rated as such because of overt violence, sexual content, and profanity.

AO – Adults Only (age 18 and older) – These games involve excessive violence, sexual content, and explicit language.

There are a number of different types of video games.

Puzzles – Puzzle games are usually acceptable for all ages and generally are rated “E.” These games involve logic and spatial arrangements. The best known puzzle game is “Tetris.”

Strategy – These games may be as straightforward as “Chessmaster” or involve the use of tactical moves of troops or players such as “Advanced Wars.”

Simulation games – Some games like “SimCity” require creativity and advanced problem-solving skills. Others involve driving or flying simulations that can be relatively tame or highly offensive such as the “Grand Theft Auto” series of video games.

Arcade games – The classic arcade games include such favorites as “Pacman” or “Frogger.” However, the newer arcade games may include games like the violent “Street Fighter.”

Role playing games – In these games, players assume the roles of characters in the game’s story. Although these games may be less graphic, they often involve fantasy and even the occult.

Action games – These games most often have an “M” rating. Many of them are point-and-shoot games that are especially dangerous.

Violent Video Games

There is cause for concern about violent video games. According to the American Academy of Pediatrics, playing violent video games increases the likelihood of adolescent violent behavior by as much as 13 percent to 22 percent.{8}

A 2005 meta-analysis of over thirty-five research studies (that included 4,000 participants) found that “playing violent video games significantly increases physiological arousal and feelings of anger or hostility, and significantly decreases pro-social helping behavior.”{9} Another study has shown a relationship between playing violent video games and being involved in violent acts.{10}

Testimony before the United States Senate documents the following: (1) that violent video games increase violent adolescent behavior, (2) that heavy game players become desensitized to aggression and violence, (3) that nearly 90 percent of all African-American females in these games are victims of violence, and (4) that the most common role for women in violent video games is as prostitutes.{11}

One of the people speaking out against violent video games is Lt. Col. Dave Grossman, whom I have interviewed on a number of occasions. He is a former West Point professor and has written books on the subject of killing.{12} He has also testified that these violent video games are essentially “killing simulators.”

Grossman testified about the shooting in Paducah, Kentucky. Michael Carneal, a fourteen-year-old boy who had never fired a handgun, stole a pistol and fired a few practice shots the night before the shooting. The next morning he fired eight shots and had eight hits (four of them head shots, one neck, and three upper torso). This is unprecedented marksmanship for a boy who had fired only a .22 caliber rifle once at a summer camp.

The typical response in firing a gun is to fire at the target until it drops. Carneal instead moved from victim to victim just like he had learned in the violent video games he played.

The goal in these games is to rack up the “highest score” by moving quickly. Grossman points out that many of the games (such as “House of the Dead” or “Goldeneye” or “Turok”) give bonus points for head shots.{13}

Does that mean that anyone who plays these games will be a killer? Of course not. But Grossman says that the kind of training we give to soldiers (operant conditioning, desensitization, etc.) is what we are also giving to our kids through many of these violent video games.

Ironically, the U.S. Marine Corps licensed one of these popular video games (“Doom”) to train their combat fire teams in tactics and to rehearse combat actions of killing.{14} The video game manufacturers certainly know these are killing simulators. In fact the advertising for one game (“Quake II” that is produced by the same manufacturer as “Doom”), says: “We took what was killer, and made it mass murder.”

Biblical Discernment

If we look back at the list of different types of video games, it is pretty easy to see that it is possible to find acceptable games as well as questionable and even dangerous video games in just about any category. That is why parental direction and discernment are so important.

The latest controversy over “Grand Theft Auto” demonstrates that the video game industry has not been effective at self-regulation. And children cannot be expected to exercise good judgment unless parents use discernment and teach it to their kids.

Paul tells us in Philippians 4:8, “Finally, brothers, whatever is true, whatever is noble, whatever is right, whatever is pure, whatever is lovely, whatever is admirable—if anything is excellent or praiseworthy—think about such things.” We should focus on what is positive and helpful to our Christian walk.

As Christians, we should develop discernment in our lives. See my article on “Media and Discernment” (www.probe.org/faith-and-culture/culture/media-and-discernment.html) for suggestions on how to develop discernment in your life and the life of your child.

Parents need to determine the possible benefits of playing video games and whether those benefits outweigh the negatives. Many of the games available today raise little or no concern. As one commentator put it, “The majority of video games on the best-seller list contain no more bloodshed than a game of Risk.”{15}

But even good, constructive games played for long periods of time can be detrimental. Over the last few years I have been compiling statistics for my teen talk on media use. The number of hours young people spend watching TV, listening to music, surfing the Internet, going to movies, etc. is huge and increasing every year. Young people spend entirely too much time in front of a screen (TV screen, computer screen, movie screen).

So even good video games can be bad if young people are staying indoors and not going outdoors for exercise. Obesity is already a problem among many young people. And good video games can be bad if they take priority over responsibilities at home and schoolwork.

Parents should understand the potential dangers of video games and make sure they approve of the video games that come into their home. They may conclude that the drawbacks outweigh the benefits. If their children do play video games, they should also set time limits and monitor attitudes and behaviors that appear. They should also watch for signs of addiction. The dangers of video games are real, and parents need to exercise discernment.

Notes

1. National Institute on Media and the Family, “Expanded Game Reviews,” www.mediafamily.org/kidscore/games_gta4.shtml.
2. Mona Charen, “Grand Theft Auto and us,” 5 August 2005, www.townhall.com/columnists/monacharen/mc20050805.shtml.
3. Steven Johnson, “Your Brain on Video Games,” Discover, July 2005, 40.
4. Ibid.
5. C. Shawn Green and Daphne Bavelier, “Action video game modifies visual selective attention,” Nature 423 (2003), 534-537.
6. Ibid., 536.
7. Jeffrey Goldsmith, “This is Your Brain on Tetris,” Wired, Issue 2.05, May 1994, 2.
8. Lori O’Keefe, “Media Exposure Feeding Children’s Violent Acts,” American Academy of Pediatrics News, January 2002.
9. Henry J. Kaiser Family Foundation, “Generation M: Media in the Lives of 8-18 Year Olds,” A Kaiser Family Foundation Study, March 2005.
10. Jeanne B. Funk, et al., “An Evidence-Based Approach to Examining the Impact of Playing Violent Video and Computer Games,” Studies in Media and Information Literacy Education, Vol. 2, Issue 4 (November 2002), University of Toronto Press.
11. Craig Anderson, “Violent Video Games Increase Aggression and Violence,” U.S. Senate Testimony, Hearing on The Impact of Interactive Violence on Children, Committee on Commerce, Science and Transportation, 106th Congress, 1st Session.
12. David Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society (New York: Little, Brown and Co, 1995) and David Grossman and G. DeGaetano, Stop Teaching Our Kids to Kill: A Call to Action Against TV, Movie and Video Game Violence (New York: Crown Books, 1999).
13. Statement of Lieutenant Colonel Dave Grossman, given before the New York State Legislature, October 1999, www.fradical.com/statement_of_lieutenant_colonel_dave_Grossman.htm.
14. Ibid.
15. Johnson, Discover, 41.

© 2005 Probe Ministries