Monday, April 13, 2009

Society's IQ

Recently I've been thinking about intelligence, how to define it - or not - and the implications of being labeled on a scale of intelligence.  

Humans have been attempting to measure "intelligence" and aptitude for certain mental tasks with tests for hundreds, if not thousands, of years.  Hierarchical societies often placed people in power based on birthright alone, but if they wanted to last for any significant period of time, they generally appointed "smart" people to actually guide the nation.  The Chinese Imperial Examination, probably the most famous pre-modern standardized test (to history geeks, anyway), lasted for roughly 1300 years and saw China develop a highly skilled, highly specialized bureaucracy which guided the nation through centuries of prosperity, including the Song and Ming Dynasties.  In fact, the exam was so successful at producing expert civil servants that, when the Mongols conquered China in the 13th century A.D., they kept the bureaucratic system largely intact and eventually reinstated the exam itself.

The exam tested various skills - reading, writing, mathematics, archery, and riding - that would have been expected of a young Chinese civil servant circa 1000 A.D.  One could say this was simply a test to ensure that a pupil had learned all of his lessons to satisfaction, not a true test of intelligence.  But did the Imperial Examination not serve the same purposes that modern I.Q. tests serve today?  I'm getting ahead of myself, though.

In 1903, a French psychologist named Alfred Binet published a book, L'Etude experimentale de l'intelligence (The Experimental Study of Intelligence), detailing the findings of his work with the Free Society for the Psychological Study of the Child on the divisions between children of "normal intelligence" and "abnormal intelligence."  Binet intended his study to help place children with special learning needs in appropriate classrooms.

In 1905, Binet, along with a research student named Theodore Simon, produced a new variant of Binet's original exam and tested it on a small group of fifty French children who had been identified by their teachers - people who interacted with them almost every day - as being of average intelligence and competence.  Binet and Simon asked the children questions of varying degrees of difficulty, ranging from simple tasks like shaking hands to complex questions involving creative thinking and inference, such as, "My neighbor has been receiving strange visitors. He has received in turn a doctor, a lawyer, and then a priest. What is taking place?"  A subject whose performance was completely average for their age would receive a "mental age" exactly corresponding to their actual age (a ten-year-old of average ability would score 10.0).  Binet, as I have said, intended only for his test to assist in the placement of children in special education programs.
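(A quick aside, and a later development rather than anything Binet himself endorsed: the familiar single-number "I.Q." came from William Stern, who proposed dividing mental age by chronological age. The little sketch below shows that ratio formula; the numbers in it are purely illustrative.)

```python
# A minimal sketch of the later "ratio IQ" (William Stern's idea, not Binet's):
# mental age divided by chronological age, times 100.

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Return the ratio IQ: (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100

# A ten-year-old who performs exactly at the ten-year-old level scores 100.
print(ratio_iq(10.0, 10.0))  # 100.0
# The same performance from an eight-year-old reads as "above average."
print(ratio_iq(10.0, 8.0))   # 125.0
```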

In the United States, variations of the Binet-Simon test were used for everything from advancing the cause of eugenics to classifying recruits for service and officer potential in WWI.  

Enough history, and back to the question I posed earlier when I said that one could claim that the Imperial Examination merely tested absorption of information, not intelligence: did the Chinese Imperial Examination serve the same purposes as modern I.Q. tests do today?  If so, do they both measure intelligence, or absorption of information?  

You see, I question the use of I.Q. tests.  I do not believe that a single number, or even a set of numbers, can accurately describe a person for all given situations.

Modern society, more so than any ancient society - even Imperial China - values simplicity and elegance of form above almost all else.  This is evident in our obsession with standardization, our struggles with cultural pluralism, even our stylistic and design preferences.  "Simple" has become a buzzword.  Hell, a popular and successful advertising campaign even centers on a big red button with the word "EASY" on it.

This obsession with the compact, the elegantly sparse, and the understated began right about the time that Alfred Binet was developing a test for kids ages 6 to 15.  In physics: James Clerk Maxwell's unification of electricity and magnetism into a single electromagnetic theory, Albert Einstein's famous law of mass-energy equivalence (published in 1905, the same year as the Binet-Simon test, no less), and later the explosion of Grand Unified Theories.  In fashion: a move away from the gaudy and lavish costumes of the 19th century towards plainer, simpler attire, including the slips and evening gowns of the 1920s - still expensive and at times flamboyant, but nowhere near as detailed or wildly over-the-top as in previous centuries.  In trade and foreign policy: the rise of globalisation, the decline of traditional national sovereignty, and the growth of international organisations.  In almost every field the world has become simpler, and I.Q. scores - and the huge amount of importance placed upon them - are a manifestation of that trend.

I want to pause for a moment and ask: what does it mean to be human?  It is a question without an answer, both philosophically and biologically.  There is such a range within what is considered "human" that any definition of that range ceases to be meaningful.  Can humanity be identified by a single gene, a single strand of DNA, or even a set of behavioural characteristics?  It is impossible to compress into any single definition all the wonders of humanity - all the beautiful variation, the fractal-like similarity and scalability hand-in-hand with the distinct individuality of each being.  Within the fractal of humanity there are an infinite number of variations - each one a person - and, following my little fractal metaphor, it is just as impossible to compress all the wonders of a single human as it is all of humanity (whatever "humanity" means).

I.Q. tests compress humans into scores.  They define people along a single range, and though they may predict with a certain degree of accuracy how well a student will do in high school or even how much money they will make, there is enough evidence that these outcomes are influenced by factors closely associated with, but not part of, I.Q. itself to cast significant doubt on what the score actually measures.

Is it possible, though, that telling someone their I.Q. score, or even telling others, can influence perception of that person and therefore have a positive or negative effect on their life?  Is it at all fair (for lack of a better word) to afford a person with a higher I.Q. score more opportunities than a comparable person with a lower score?  Similarly, is it fair for a society to spend more money on a person with a lower-than-average I.Q. than on a person with a normal I.Q.?  What unintended implications can testing people for "intelligence" have?  Is it moral?  Is it just?

And to think that Alfred Binet was only trying to help children find a classroom that suited their needs.

P.S.  While researching this post, I ran across an uncited Wikipedia mention of a Venetian meritocracy during the period of the Venetian Republic.  Apparently Venice used a "points system" to determine who was on the oligarchical ruling council in a given year.

Saturday, February 28, 2009

Why nuclear deterrence is not a good thing, explained in a very roundabout way.

Since time immemorial, wherever human conflict has been involved, the guy with the bigger stick/rock/sling/army/gun has usually commanded a great deal of say in the matter.  If the nation next door had a bigger, more powerful army, you definitely thought twice about attacking it.  This has been one of the underlying principles of warfare for millennia: the principle of deterrence.  Modern military strategists can be forgiven, then, for thinking that deterrence also applies to modern-day nuclear conflicts.

Another thing that must be taken into account when discussing conflict is the notion of state warfare.  Since the Peace of Westphalia in 1648, the nation-state - that is, a sovereign body of people (generally, but not necessarily, of the same ethnic background) with a common government and a common set of governing principles (also known as laws) - has been the most common belligerent in conflicts.  Even before the formal definition of states at Westphalia, however, proto-nation-states such as tribes, clans, kingdoms, empires, and principalities had been warring for centuries.  Each one was quite clearly defined, and often marched into battle under distinguishing marks or banners so the different sides could be told apart.  But what does this have to do with deterrence?

Well, quite a lot, actually.  Deterrence only works if there is a clearly defined set of people who are being deterred from doing something - presumably attacking.  Today, conflicts are nowhere near so well-defined as they were when the standard of sovereignty was set at Westphalia or when Europe was divvied up at the Congress of Vienna following the Napoleonic Wars.  Back then, if one nation-state attacked another, you knew whom you were attacking and whom you were defending against.

When did the shift from nation-states to non-state actors as belligerents take place, though?  That question is hard to answer.  Many modern wars are waged against terrorist groups or other non-state groups with no defined territory and no defined citizenry.

This makes them impossible to deter, at least with nuclear weapons.  Here's where I make the obligatory reference to MAD - Mutually Assured Destruction.  In the Cold War, NATO and the USSR stockpiled nuclear weapons and kept them ready at a minute's notice to ensure that if the other side ever even thought about pressing that red button, it would be bombed to smithereens.  Essentially, it boils down to "if you bomb us, we bomb you back, so nobody dares to bomb anyone."  Nuclear deterrence is perhaps the ultimate form of deterrence, because almost nothing can stop a nuclear weapon, and if a country does anything at all to provoke a nuclear-armed state, that country can expect Hell to rain down upon its head.

This all changes if one of the belligerents is not a state, though.  If a terrorist organization were ever in a position to obtain and launch a nuclear weapon against a country, it could do so with effective impunity.  Terrorist organizations do not hold territory, and, depending on the delivery method of the warhead, they could make it impossible to mount an effective counterattack.  Any retaliation with a nuclear weapon against a terrorist organization would come with prohibitively high civilian casualties and would draw an unacceptable amount of flak from other countries (and rightly so!).  Any non-domestic, non-nuclear retaliation would involve the potential forceful violation of the sovereignty (read: invasion) of another state, which would not be received well either.  Furthermore, in the event of an invasion or other form of retaliation, the group responsible for the attack could simply up and move to another location - the beauty of not being constrained to a particular territory.

Deterrence may have worked during the Cold War - indeed, it may have saved millions of lives - but it is simply an outdated idea.  The concept of maintaining more than a very small strategic reserve of nuclear weapons - if any at all - is absolutely absurd.  It ties up billions of dollars that could be spent on measures to make sure a retaliatory strike is never necessary.

(Note: This is not a true examination of nuclear deterrence strategy.  Rather, I assume that if you want to know more about that fascinating topic you'll read the relevant Wikipedia entries, and I simply go on to explain why nuclear deterrence won't work in modern conflicts.  I might write a post on Cold War-era nuclear deterrence in the future, but for the moment, I think it's been pretty well covered.)

Thursday, January 29, 2009

Chemicals are fun!

Some hill in the Jemez. I like the clouds.


A chile light on the Christmas tree.


Some of those tiny yellow berries that they tell you not to eat.  At this magnification, they look like tomatoes.

Monday, January 26, 2009

Death from above: the responsibility to protect

The question of humanitarian intervention will probably be the defining issue in international politics in this century. It is an issue that is relatively new to the stage of international affairs, and it poses a difficult trade-off: should a nation intervene in another's affairs when innocent civilians are dying? Or should national sovereignty – the integrity of national borders, powers, and identity – be upheld as the supreme law of the land, and at any cost? Historically, humanitarian intervention has been a “damned if you do, damned if you don't” situation.

In Rwanda, in 1994, the nations of the world abstained from intervening until they could ignore the humanitarian outcry no longer, but were accused of offering too little, too late, and hundreds of thousands of Rwandans died in brutal massacres. In Kosovo, in 1999, the North Atlantic Treaty Organization (NATO), recalling the atrocities of Rwanda and loath to repeat the mistakes made five years earlier, intervened militarily against Yugoslavia with the stated goal of driving Serbian forces out of Kosovo, installing international peacekeepers, and returning refugees to their homes. However, the United Nations Security Council did not sanction the NATO bombing campaign, and many questions remained as to how thoroughly peaceful solutions were explored before violent action was taken. In East Timor, in 1999, an intervention was sanctioned by the UN, but many had already died by the time peacekeeping forces arrived.

Although many deride humanitarian intervention as a cynical method of fulfilling a nation's political aims, by definition “humanitarian intervention” has a just cause – the preservation of human life. The standard upon which all so-called “humanitarian” interventions must be tried is the Universal Declaration of Human Rights (UDHR) of 1948, a foundational document of the United Nations, and to this day one of the most important documents in international relations. The rights enumerated within the UDHR are accepted as the global standard by the 194 member and observer states of the United Nations. Indeed, in the Proclamation of Tehran, the International Conference on Human Rights declared that “the Universal Declaration of Human Rights … constitutes an obligation for the members of the international community.” The most far-reaching criminal court in the world, the International Criminal Court (ICC), established under the Rome Statute, draws on the principles enshrined in the UDHR when indicting individuals for war crimes and “crimes against humanity.” According to Article 3 of the Declaration, “everyone has the right to life, liberty and security of person.” Therefore, all nations are obligated to respect that right to life, liberty, and security of person stipulated in the UDHR. This is the foundation of the doctrine of the responsibility to protect. To ignore the responsibility to protect and to hide one's crimes behind the shield of national sovereignty is at least as cynical as the act of intervening with political motives, if not more so.

However, the decision whether or not to endorse interventions as “humanitarian” remains the purview of the United Nations Security Council. Due to the unique rules and procedures of that body, one country can often derail an entire proposal to intervene on the side of civilians, even one otherwise in complete agreement with the Universal Declaration of Human Rights and all other international laws. This leads to watered-down and untimely responses from the United Nations, the only international body that, due to its near-universal membership, can confer “true” legitimacy on an intervention from the perspective of all nations. In effect, this forces a choice between two undesirable outcomes: a single nation, or a coalition of nations, can intervene without UN approval, risking international prosecution and censure (Kosovo, Iraq); or nations can do nothing as innocents die and be widely criticized by the press and the citizenry alike (Rwanda, Darfur). In both situations, the outcome reflects badly on both the UN (which looks either impotent or evil) and the doctrine of humanitarian intervention (which looks either opposed to international law and democracy or useless in the face of ruthless dictators/rebels/genocidaires). Much of the bad reputation that humanitarian intervention has, then, is unearned; it is not the doctrine itself (which is guided by the noble ideal of protecting human life), nor the United Nations as an organization (which is left in the unenviable position of being the impartial mediator), that causes the complaints leveled against humanitarian intervention.  It is the member nations themselves who use both the organization and the ideal as an excuse and a scapegoat.

Opponents of the responsibility to protect love to point to the 2003 invasion of Iraq as an example of the cynical and neoimperialistic ways that nations have employed humanitarian interventions. In many ways, it is exactly that. Iraq was a miserable failure of diplomacy and intelligence. However, it is not a sign that the concept of humanitarian intervention is wrong, or that the responsibility to protect is in any way invalid. If anything, Iraq is a reminder that we must improve our systems of international law, that we must put more faith in diplomacy and peaceful resolution of conflict, and that if all else fails, then (and only then) we should intervene militarily.

There is no single agreed-upon definition of national sovereignty, but a general one is “the international independence of a state, combined with the right and power of regulating its internal affairs without foreign dictation.”  Although national or external sovereignty is regarded as a high law of international affairs, there is no law more supreme than the right to life enshrined in the Universal Declaration of Human Rights. Any nation that violates the provisions of the UDHR forfeits its claim to national sovereignty and should be subject to judgment under international laws such as the Rome Statute and the Geneva Conventions. However, not all nations accept international laws and treaties such as these. In those situations, the use of humanitarian intervention is warranted, and if massacres, atrocities, or genocides are likely to occur, then intervention is not only warranted, it is morally requisite.

No nation should be allowed to use the pursuance of peace as an excuse to invade another sovereign state, but neither should any country be allowed to use sovereignty as an excuse to slaughter civilians. Humanitarian intervention and the responsibility to protect are protections against the latter situation. They are part of an ideal: the separation of politics and humanity. Human life should be respected with the utmost solemnity and protected with the greatest fervor. It is not a question of whether or not humanitarian intervention and the responsibility to protect as whole doctrines are valid; it is a question of when we must use them. They are not perfect, but they are infinitely better than the alternative: the deaths of thousands, perhaps millions of innocents.

[Yay for multi-use articles!  Originally, this was a paper for an English class, but I think that it works just as well, if not better, as a blog post.]

Saturday, January 10, 2009

Corruption, cynicism, and corporations

With the recent developments in the "corruption" cases of Bill Richardson, Rod Blagojevich, Manny Aragon, and many, many other political figures, I think it's high time to ask "should we really elect someone if they want to be elected?"

If that seems completely backwards, it's because it is, compared with the present way of doing things.  Right now, people have to stand for office.  Well, if someone wants the job, then it's fairly safe to say that they want it, at least in part, for some personal gain.  After all, the cynical side of our society is always quick to deride the politician as self-serving scum, and current events have done nothing but reinforce that notion.

If, then, all candidates for elected office who put themselves forward can be dismissed as too vested in the outcome and the powers of the office, how do we select candidates?

At first glance, one might think that a sort of middle-school-esque "nomination system" would work.  That is, until one realises that this isn't middle school and that open nominations encourage just as much corruption and inside dealing as self-nomination, if not more.  The only advantage I see to nominating people is that we know who their allies will be.

No, I believe that we have the best system for nominations as we can get right now.  The best way to decrease the amount of corruption in politics is to provide for more oversight.  "What if the overseers are corrupt?"  Well, we'll just have to trust to the law of averages that if we have a large enough oversight board, at least one person on it will not be corrupt.  Now that is cynicism.  Trusting the fidelity of a nation's political system not to people, but to a statistical law.
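(For the curious, here's the back-of-the-envelope arithmetic behind that appeal to the law of averages. It's only a sketch, and the key assumption - that each overseer is independently corrupt with some probability p - is mine, purely for illustration: the chance that at least one member of an n-person board stays honest is 1 - p^n, which climbs towards certainty surprisingly quickly as the board grows.)

```python
# A sketch of the "law of averages" argument above, under the (purely
# illustrative) assumption that each overseer is independently corrupt
# with probability p.

def chance_at_least_one_honest(p: float, n: int) -> float:
    """Probability that an n-person board has at least one honest member."""
    return 1 - p ** n

for n in (1, 5, 15):
    print(f"board of {n:2d}: {chance_at_least_one_honest(0.5, n):.5f}")
# With p = 0.5, a lone overseer is a coin flip (0.50000); a 5-person board
# gives 0.96875; a 15-person board gives 0.99997.
```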