Monday, April 13, 2009

Society's IQ

Recently I've been thinking about intelligence, how to define it - or not - and the implications of being labeled on a scale of intelligence.  

Humans have been using tests to measure "intelligence" and aptitude for certain mental tasks for hundreds if not thousands of years.  Hierarchical societies often placed people in power based on birthright alone, but if they wanted to last for any significant period of time, they generally appointed "smart" people to actually guide the nation.  The Chinese Imperial Examination, probably the most famous pre-modern standardized test (to history geeks, anyway), lasted for roughly 1300 years and saw China develop a highly skilled, highly specialized bureaucracy which guided the nation through centuries of prosperity, including the Song and Ming Dynasties.  In fact, the exam was so successful at producing expert civil servants that, when the Mongols invaded China in the 13th century A.D., they kept the bureaucratic system and the exam almost completely intact.  

The exam tested various skills, such as reading, writing, mathematics, archery, and riding, which would have been expected of a young Chinese civil servant circa 1000 A.D.  One could say this was simply a test to ensure that a pupil had learned all of his lessons to satisfaction, not a true test of intelligence.  However, did not the Imperial Examination serve the same purposes that modern I.Q. tests serve today?  But I'm getting ahead of myself.

In 1903, a French psychologist named Alfred Binet published a book, L'Étude expérimentale de l'intelligence, or The Experimental Study of Intelligence, detailing his findings, made as part of the Free Society for the Psychological Study of the Child, on the division between children of "normal intelligence" and "abnormal intelligence."  Binet intended his study to help place children with special learning needs in appropriate classrooms.  

In 1905, Binet, along with a research student named Théodore Simon, produced a new variant of Binet's original exam and tested it on a small group of 50 French children who had been identified by their teachers - people who interacted with them almost every day - as being of average intelligence and competence.  Binet and Simon asked the children questions of varying degrees of difficulty, ranging from simple tasks like shaking hands to complex questions involving creative thinking and inference, like, "My neighbor has been receiving strange visitors. He has received in turn a doctor, a lawyer, and then a priest. What is taking place?"  A subject whose performance on the test was completely average for their age would receive a score exactly corresponding to that age (such as 10 years old = 10.0).  Binet, as I have said, intended only for his test to assist in the placement of children in special education programs.

In the United States, variations of the Binet-Simon test were used for everything from advancing the cause of eugenics to classifying recruits for service and officer potential in WWI.  

Enough history; back to the question I raised when I said that one could claim the Imperial Examination merely tested absorption of information, not intelligence: did the Chinese Imperial Examination serve the same purposes that modern I.Q. tests serve today?  And if so, do both measure intelligence, or merely absorption of information?  

You see, I question the use of I.Q. tests.  I do not believe that a single number, or even a set of numbers, can accurately describe a person for all given situations.

Modern society, more so than ancient society, even Imperial China, values above almost all else simplicity and elegance of form.  This is evident in our obsession with standardization, our struggles with cultural pluralism, even our stylistic and design preferences.  "Simple" has become a buzz-word.  Hell, a popular and successful advertising campaign even centers around a big red button with the word "EASY" on it.

This obsession with the compact, the elegantly sparse, and the understated began right about the time that Alfred Binet was developing a test for kids ages 6 to 15.  In physics: James Clerk Maxwell's fusion of electricity and magnetism into a single electromagnetic force, Albert Einstein's development of his famous law of energy-mass equivalence (in 1905, the same year as the Binet-Simon test's advent, no less), and the later explosion of Grand Unified Theories.  In fashion: a move away from the gaudy and lavish costumes of the 19th century towards plainer, simpler attire - the slips and evening gowns of the 1920s, still expensive and at times flamboyant, but nowhere near as detailed or wildly over-the-top as in previous centuries.  In trade and foreign policy: the rise of globalisation, the decline of traditional national sovereignty, and the rise of international organisations.  In almost every field, the world has become simpler, and I.Q. scores - and the huge amount of importance placed upon them - are a manifestation of that trend.

I want to pause for a moment and ask, what does it mean to be human?  A question without an answer, both philosophically and biologically.  There is such a range within what is considered "human" that definitions of that range cease even to exist.  Can humanity be identified by a single gene, a single strand of DNA, or even a series of behavioural characteristics?  It is impossible to compress into any one measure all the wonders of humanity, all the beautiful variation, the fractal-like similarity and scalability hand-in-hand with the distinct individuality of each being.  Within the fractal of humanity there are an infinite number of variations - each one a person - and, following my little fractal metaphor, it is just as impossible to compress all the wonders of a single human as it is all of humanity (whatever "humanity" means).

I.Q. tests compress humans into scores.  They define people within a single range, and though they may predict with a certain degree of accuracy how well a student will do in high school, or even how much money they will make, there is enough evidence that these outcomes are influenced by factors closely associated with, but not part of, I.Q. scores to cast significant doubt on that predictive power.

Is it possible, though, that telling someone their I.Q. score, or even telling others, can influence perceptions of that person and therefore have a positive or negative effect on their life?  Is it at all fair (for lack of a better word) to afford a person with a higher I.Q. score more opportunities than a comparable person with a lower score?  Similarly, is it fair for a society to spend more money on a person with a lower-than-average I.Q. than on a person with a normal I.Q.?  What unintended implications can testing people for "intelligence" have?  Is it moral?  Is it just?

And to think that Alfred Binet was only trying to help children find a classroom that suited their needs.

P.S.  While researching this post, I ran across an uncited Wikipedia mention of a Venetian meritocracy during the period of the Venetian Republic.  Apparently Venice used a "points system" to determine who was on the oligarchical ruling council in a given year.

Saturday, February 28, 2009

Why nuclear deterrence is not a good thing, explained in a very roundabout way.

Since time immemorial, when human conflict has been involved, the guy with the bigger stick/rock/sling/army/gun has usually commanded a great deal of say in the matter.  If the nation next door to you had a bigger, more powerful army, you definitely thought twice about attacking them.  This has been one of the underlying principles of warfare for millennia: the principle of deterrence.  Modern military strategists can be forgiven, then, for thinking that deterrence also applies to modern-day nuclear conflicts.

Another thing that must be taken into account when discussing conflict is the notion of state warfare.  Since the Peace of Westphalia in 1648, the nation-state - that is, a sovereign area of people (generally, but not necessarily, of the same ethnic background) with a common government and set of governing principles (also known as laws) - has been the most common belligerent in conflicts.  Even before the formal defining of states at Westphalia, however, proto-nation-states such as tribes, clans, kingdoms, empires, and principalities had been warring for centuries.  Each one was quite clearly defined, and often marched into battle with distinctive marks or banners to distinguish between the different sides.  But what does this have to do with deterrence?

Well, quite a lot, actually.  Deterrence only works if there is a clearly defined set of people who are being deterred from doing something, presumably attacking.  Today's conflicts are nowhere near so well-defined as they were when the standard of sovereignty was set at Westphalia, or when Europe was divvied up following the Napoleonic Wars at the Congress of Vienna.  When one nation-state attacked another, everyone knew who was attacking and who was defending.

When did the shift from nation-states as belligerents to non-state actors as belligerents happen, though?  That question is hard to answer.  Many modern wars are waged against terrorist groups or other non-state groups with no defined territory and no defined citizenry.

This makes them impossible to deter, at least with nuclear weapons.  Here's where I make the obligatory reference to MAD - Mutually Assured Destruction.  In the Cold War, NATO and the USSR stockpiled nuclear weapons and kept them ready at a minute's notice to ensure that if the other side ever even thought about pressing that red button, it would be bombed to smithereens.  Essentially, it boils down to "bomb them before they bomb you."  Nuclear deterrence is perhaps the ultimate form of deterrence, because almost nothing can stop a nuclear weapon, and if a country does anything at all to provoke a nuclear-armed state, that country can expect Hell to rain down upon its head.

This all changes if one of the belligerents is not a state, though.  If a terrorist organization was ever in a position to obtain and launch a nuclear weapon against a country, it could do so with effective impunity.  Terrorist organizations do not have territory, and, depending on the delivery method of the warhead, they could make it impossible to mount an effective counterattack.  Any retaliation with a nuclear weapon against a terrorist organization would have prohibitively high civilian casualty rates and would draw an unacceptable amount of flak from other countries (and rightly so!).  Any non-domestic and non-nuclear retaliation would involve the potential forceful violation of sovereignty (read: invasion) of another state, which would not be received well, either.  Furthermore, in the event of an invasion or other form of retaliation, the group responsible for the attack could simply up and move to another location - the beauty of not being constrained to a particular territory.

Deterrence may have worked during the Cold War - in fact, it may have saved millions of lives - but the fact is that it is simply an outdated idea.  The concept of maintaining more than a very small strategic reserve of nuclear weapons - if any - is absolutely absurd.  It ties up billions of dollars that could be spent on measures to make sure a retaliatory strike is never necessary.

(Note: This is not a true examination of nuclear deterrence strategy.  Rather, I just assume that if you want to know about that fascinating topic, you'll read the relevant Wikipedia entries, and I simply go on to explain why nuclear deterrence won't work in modern conflicts.  I might write a post on Cold War-era nuclear deterrence in the future, but for the moment, I think it's been pretty well covered.)

Thursday, January 29, 2009

Chemicals are fun!

Some hill in the Jemez. I like the clouds.


A chile light on the Christmas tree.


Some of those tiny yellow berries that they tell you not to eat.  At this magnification, they look like tomatoes.

Monday, January 26, 2009

Death from above: the responsibility to protect

The question of humanitarian intervention will probably be the defining issue in international politics in this century. It is an issue that is relatively new to the stage of international affairs, and it poses a difficult trade-off: should a nation intervene in another's affairs when innocent civilians are dying? Or should national sovereignty – the integrity of national borders, powers, and identity – be upheld as the supreme law of the land, and at any cost? Historically, humanitarian intervention has been a “damned if you do, damned if you don't” situation.

In Rwanda, in 1994, the nations of the world abstained from intervening until they could ignore the humanitarian outcry no longer, but were accused of offering too little, too late, and hundreds of thousands of Rwandans died in brutal massacres. In Kosovo, in 1999, the North Atlantic Treaty Organization (NATO), recalling the atrocities of Rwanda and loath to repeat the mistakes made five years earlier, intervened militarily against Yugoslavia with the stated goals of driving Serbian forces out of Kosovo, installing international peacekeepers, and returning refugees to their homes. However, the United Nations Security Council did not sanction the NATO bombing and ground campaign, and many questions remained as to the extent to which peaceful solutions were explored before violent action was taken. In East Timor, in 1999, an intervention was sanctioned by the UN, but many had already died by the time peacekeeping forces arrived.

Although many deride humanitarian intervention as a cynical method of fulfilling a nation's political aims, by definition "humanitarian intervention" has a just cause - the preservation of human life. The standard upon which all so-called "humanitarian" interventions must be tried is the Universal Declaration of Human Rights (UDHR) of 1948, a foundational document of the United Nations and to this day one of the most important documents in international relations. The rights enumerated within the UDHR are accepted as the global standard by the member and observer states of the United Nations. Indeed, in the Proclamation of Tehran, the International Conference on Human Rights declared that "the Universal Declaration of Human Rights … constitutes an obligation for the members of the international community." The International Criminal Court (ICC), established by the Rome Statute and arguably the most powerful court in the world, draws on the same human rights principles when it indicts individuals for war crimes and "crimes against humanity." According to Article 3 of the Declaration, "everyone has the right to life, liberty and security of person." All nations are therefore obligated to respect the right to life, liberty, and security of person stipulated in the UDHR. This is the foundation on which the doctrine of the responsibility to protect is built. To ignore the responsibility to protect and to hide one's crimes behind the shield of national sovereignty is just as cynical as intervention with political motives, if not more so.

However, the decision whether or not to endorse interventions as “humanitarian” remains the purview of the United Nations Security Council. Due to the unique rules and procedures of that body, one country can often derail an entire proposal to intervene on the side of civilians that is otherwise in complete agreement with the Universal Declaration of Human Rights and all other international laws. This leads to watered-down and untimely responses from the United Nations, the only international body that, due to its near-universal membership, can confer “true” legitimacy on an intervention from the perspective of all nations. In effect, this forces a decision to be made between two undesirable outcomes: a single nation, or a coalition of nations, can intervene without UN approval, risking international prosecution and censure (Kosovo, Iraq); or nations can do nothing, which is widely criticized by the press and the citizenry, as innocents die (Rwanda, Darfur). In both situations, the outcome reflects badly on both the UN (which looks either impotent or evil) and the doctrine of humanitarian intervention (which looks either opposed to international law and democracy or useless in the face of ruthless dictators/rebels/genocidaires). A majority of the bad reputation that humanitarian intervention has, then, is unearned; it is not the doctrine itself (which is guided by the noble ideal of protecting human life), nor the United Nations as an organization (which is left in the unenviable position of being the impartial mediator) that causes the complaints leveled against humanitarian intervention.  It is the member nations themselves who use both the organization and the ideal as an excuse and a scapegoat.

Opponents of the responsibility to protect love to point to the 2003 invasion of Iraq as an example of the cynical and neoimperialistic ways that nations have employed humanitarian interventions. In many ways, it is exactly that. Iraq was a miserable failure of diplomacy and intelligence. However, it is not a sign that the concept of humanitarian intervention is wrong, or that the responsibility to protect is in any way invalid. If anything, Iraq is a reminder that we must improve our systems of international law, that we must put more faith in diplomacy and peaceful resolution of conflict, and that if all else fails, then (and only then) we should intervene militarily.

There is no agreed-upon specific definition of national sovereignty, but a general definition is “the international independence of a state, combined with the right and power of regulating its internal affairs without foreign dictation."  Although national or external sovereignty is regarded as a high law of international affairs, there is no law more supreme than the right to life enshrined in the Universal Declaration of Human Rights. Any nation that commits an offense by breaking the provisions of the UDHR forfeits its national sovereignty, and should be subject to judgment according to international laws such as the Rome Statute and the Geneva Conventions. However, not all nations accept international laws and treaties such as these. In these situations, the use of humanitarian intervention is warranted, and if massacres, atrocities, or genocides are likely to occur, then intervention is not only warranted, it is morally requisite.

No nation should be allowed to use the pursuance of peace as an excuse to invade another sovereign state, but neither should any country be allowed to use sovereignty as an excuse to slaughter civilians. Humanitarian intervention and the responsibility to protect are protections against the latter situation. They are part of an ideal: the separation of politics and humanity. Human life should be respected with the utmost solemnity and protected with the greatest fervor. It is not a question of whether or not humanitarian intervention and the responsibility to protect as whole doctrines are valid; it is a question of when we must use them. They are not perfect, but they are infinitely better than the alternative: the deaths of thousands, perhaps millions of innocents.

[Yay for multi-use articles!  Originally, this was a paper for an English class, but I think that it works just as well, if not better, as a blog post.]

Saturday, January 10, 2009

Corruption, cynicism, and corporations

With the recent developments in the "corruption" cases of Bill Richardson, Rod Blagojevich, Manny Aragon, and many, many other political figures, I think it's high time to ask "should we really elect someone if they want to be elected?"

If that seems completely backwards, it's because it is, compared with the present way of doing things.  Right now, people have to stand for office.  Well, if someone wants the job, then it's fairly safe to say that they want it, at least in part, for some personal gain.  After all, the cynical part of our society is always quick to deride the politician as self-serving scum, and current events have done nothing but reinforce that notion.

If, then, all candidates for elected office who put themselves forward can be dismissed as too vested in the outcome and the powers of the office, how do we select candidates?

At first glance, one might think that a sort of middle-school-esque "nomination system" might work.  That is, until one realises that this isn't middle school and that open nominations encourage just as much, if not more, corruption and inside dealing as self-nomination.  The only advantage I see to nominating people is that we know who their allies will be.

No, I believe that we have the best nomination system we can get right now.  The best way to decrease the amount of corruption in politics is to provide more oversight.  "What if the overseers are corrupt?"  Well, we'll just have to trust the law of averages: if we have a large enough oversight board, at least one person on it will not be corrupt.  Now that is cynicism.  Trusting the fidelity of a nation's political system not to people, but to a statistical law.

Sunday, December 21, 2008

The death of PC gaming and the rise of the netbook

WARNING: The following post may contain technical terms that you will have to Google.  If you think your computer is run by magic faeries and a pinch of gold dust, you might want to skip this one.

I love games.  I play both computer (Windows PC) and video games (Wii and PS3), and I have done ever since my family had a computer (1996 or thereabouts).  However, there is a definite trend in the games industry away from computers, and towards console games.  To me, this seems a bit odd; most people already own computers, many computers are as powerful, if not more so, than current-generation consoles (especially the Wii), and wouldn't you think it would be cheaper and easier to consolidate one's games into one system, the computer?

Console game sales far outpace sales of computer games.  The Entertainment Software Association, or ESA, quotes statistics that support this claim: in 2007, console game sales accounted for $8.64 billion, while computer games accounted for a comparatively measly $910 million.

I think that there are a few reasons for this decline in PC gaming vis-à-vis console gaming.  The first is that dreaded and sick thing known as "Recommended System Requirements."

I remember trying to convince my parents to let me get a console when I was 12.  My main argument centered around the statement "there are no system requirements!  It always runs!"  This is my biggest beef with computer games, and I think it may have led to the downfall of the platform.  Console games, besides the fairly obvious "this game runs on this system" sticker on the front cover, have no system requirements.  It's either on such-and-such a system (and at full quality), or it's not.  

PC games, in contrast, come with a rather cryptic message on the bottom of the box that has a list of the "minimum requirements" (ie whether or not you can load the menu screen) and the "recommended requirements" (ie whether or not you can actually run the game).  You have to actually know what kind of processor you have, how much RAM you have, what video card you have, and what speeds they all run at.  You have to actually know how your computer works!  Oh noes!

In my experience with computer games, I have countless times bought a game, taken it home, and installed it, only to find that it either doesn't run at all or runs at such low quality as to be virtually unplayable.

Console games?  No problem.  Go to the store, buy the game that says "I run on your console!", pop the disc in, and bingo!, you're playing within five minutes.  No long installs (well, not until recently, anyway), no system requirements, no pain.

The second reason that I believe computer games are on their way out has to do with hardware.  Video game consoles have pretty much the same hardware, no matter what version of a platform you buy (yes, there are occasional updates, but for the most part they consist of nothing more than a new DVD drive, or a slightly faster processor).  PCs, on the other hand, are constantly being updated.  New graphics cards (arguably the single most important factor in how a computer runs a game) come out almost monthly; Intel and AMD (the two major processor manufacturers) release new processors regularly; new standards of RAM (random access memory, the stuff that stores data temporarily, as opposed to the hard drive, which stores things permanently) come out yearly.  The only thing that doesn't seem to change constantly in a computer is the hard drive.

Like all Apple products, PCs are obsolete almost the moment you buy them.  Because there is no standard of hardware - no benchmark PC - games can, and do, vary over the entire spectrum of system requirements.  The prime example of this is a game called Crysis (pronounced like "crisis").  Gamers love to poke fun at Crysis (in the way that school kids poke fun at the bully, but are secretly afraid of and awed by him).  Crysis is consistently described as about two years ahead of current hardware and continues, a year after its release, to be the gold standard of extreme graphics on the PC - an amazing feat.

However, I have never seen a PC that could run Crysis at the highest settings.  They do exist, but they cost $5000 and up.

Who wants to constantly have to burn money on a computer to make sure that it can run the latest games?  Not I.  And, it seems, not the 38% of American households that own a video games console.

The third reason that "hardcore" PC games are going to die (which finally explains the title of this post) is that the world is transitioning away from big, powerful computers to small, portable, less powerful computers.  The "netbook" is a term coined fairly recently for the new category of computers with low-power processors and screens under 12" across.

I am typing this on a laptop, and although it's a huge 17" desktop replacement, the very fact that I own a laptop and not a more powerful desktop is an admission that I value portability more than power.

It is no coincidence that a "niche product" like the netbook exploded into the mainstream in the biggest year for video game console sales ever.  This is the point at which computers and games go their separate ways.  Video game consoles have and will continue to evolve into sophisticated multimedia centers, with games at their cores, while computers will evolve into more portable devices that center around interaction via the internet.

Indeed, some netbooks already blur the lines between the internet and a desktop environment.  Google recently announced a project, called Native Client, that would run x86 code (the types of programs that you run on your computer) inside your browser, which is presently limited to things like Flash or JavaScript.

Some may argue that games like World of Warcraft, which have an exclusively PC (and Mac) audience and are extremely popular, disprove this theory.  As much as I am inclined to dismiss these people as n00bs, they have a point.  WoW and other Massively Multiplayer Online Games (MMOs) have a tremendous audience on the computer.  However, new MMOs, like BioWare's The Old Republic, a Star Wars-themed MMO which has received a huge amount of hype, will be available on consoles as well as computers.  As internet connection speeds increase and MMOs become more hardware-intensive, the limiting factor for the graphics of PC MMOs like WoW will cease to be connection bandwidth and will instead become the actual video hardware of the machines they are played on.

Computers and consoles are headed in fundamentally different directions, and only one can take gaming with it.  At the moment, it would appear that consoles have it mostly wrapped up.  Of course, what does it matter to Microsoft and Sony, the companies that make both our computers and our consoles, if we have to buy one of each?

Friday, December 5, 2008

Metricate me, Cap'n!

To avoid any awkward confusion upfront, "metrication" is defined not as "rewriting something in such a way that it is indecipherable to Americans" but as "the act, process, or result of establishing the metric system as the standard system of measurement."

The United States has officially recognized and endorsed the use of the metric system (officially the International System of Units - Le Système International d'Unités, or SI) since 1866.  However, it is one of only three countries in the world that have not adopted it as their primary system of measurement (the other two are Liberia - a nation founded by freed American slaves - and Myanmar).

This is not a post about how great the metric system is (very great), or why the metric system is better than the conventional system (it just is), or even how stupid the US is for refusing to adopt such a common sense series of units (quite stupid).  No, this post is none of these things because all of these things have been written about ad nauseam.

Essentially, the United States has refused to switch to the metric system because of a myriad of political and cultural reasons.  It is the only developed country in the world that has continued to use conventional units (with the quasi-exception of the UK), and most people tend to believe that metric units will continue to be used only in academia and technical fields like robotics and engineering.  However, I believe that the US could, and will, switch much faster, and much sooner, than is presently predicted.  And all because of the Internet.

North America has the highest percentage of Internet penetration in the world (73%); the United States alone has about 220 million Internet users.  Internet culture has blended so much with American culture that it is not uncommon to hear Internet expressions like "lol" or "1337" (pronounced "leet" and short for "elite" for all you non-1337 h4x0rz out there) used in everyday verbal conversations.  The Internet also uses metric.

Think about it: the hard drive in the computer you are reading this on is measured in gigabytes - the SI prefix giga, meaning "one billion," plus bytes.  My Internet connection is measured in Mbps - megabits per second (mega being the SI prefix for "one million").  The resolution of the photos you uploaded to Facebook the other day is measured in megapixels.  You use the metric system every day on a computer and on the Internet.
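(If you want to see that arithmetic spelled out, here's a quick sketch - the 500 GB drive, 8 Mbps connection, and 10-megapixel photo are just numbers I made up for illustration, not measurements of anything in particular.)

```python
# A minimal sketch of the SI-prefix arithmetic behind everyday computing units.
# The specific figures (500 GB drive, 8 Mbps connection, 10-megapixel photo)
# are hypothetical examples chosen only for illustration.

SI = {"kilo": 10**3, "mega": 10**6, "giga": 10**9}

drive_bytes = 500 * SI["giga"]          # "500 GB" as drive makers label it: 500 x 10^9 bytes
connection_bits_per_s = 8 * SI["mega"]  # "8 Mbps": 8 x 10^6 bits per second
photo_pixels = 10 * SI["mega"]          # "10 megapixels": 10 x 10^6 pixels

# How long would that connection take to transfer the drive's full contents?
seconds = (drive_bytes * 8) / connection_bits_per_s  # bytes -> bits, then divide by the bit rate
print(f"{drive_bytes:,} bytes at {connection_bits_per_s:,} bit/s is about {seconds / 3600:.0f} hours")
```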

The international nature of the Internet also contributes to the metric influence.  Since so many (metric) countries are represented on the Internet, and since the Internet hosts content from all of them, it is inevitable that if one spends enough time on the Internet, one will encounter the metric system.

It is this subtle infiltration of America by the metric system that I believe will ultimately lead to a United States in line (literally) with the rest of the world.  90% of US residents aged 18-29 use the Internet, and so the metric system has finally learned what the Catholic Church has long known - "get 'em while they're young."  I foresee a kind of Glorious Revolution in which the metric system is finally introduced, in policy, as the primary system of measurement for the United States by the maturing Internet-age of Americans - those born 1992 (the birth of the World Wide Web) and later.

Gone will be the Carter-era pamphlets on "metrication" and a "metric future," to be replaced by...well, nothing.  We don't need propaganda to convince us to use the metric system, we already use it voluntarily with our computers and the Internet.  The Internet has brought the world together, and has (recently) begun the process of standardizing a compendium of knowledge and experience (including a system of measurement) that transcends national boundaries.  As high technology becomes more and more integrated within our culture, the metric system, the measurement scheme of high tech, will become integrated, as well.

Fear not the revolution, for it has already come.

P.S. Commenter Dr. Detroit makes a very good point, and I am reproducing part of his comment here: 
"There were many starts and fits in the direction of the metric system in the US since the fateful year of 1866 when it became legal throughout the land. A toxic combination of business lobbies (it's too expensive!), undereducated patriots (it's un-Amerikun!), and sheer inertia (imperial works, why bother?) has killed off any serious attempts at conversion several times. Although people may know their kilo-, mega-, and giga- prefixes thanks to PC's and the Internet, we still live in an America of 21-inch monitors, 3.5-inch hard drive bays, and hard-drive densities measured in bits/square inch." 
It is, sadly, true that high technology, because much of it is developed in America, is subject to the awkward dual use of metric and customary units.  I concede that many high-tech devices are still measured in inches and other imperial units.  However, it is conceivable that we are in the first stages of a metric transformation that could be graphed as a parabola - that is, a transformation that starts slowly but builds upon itself to become a huge and significant force within a short period of time.  As the generations that never learned the metric system age (or, as my mom so eloquently put it to me, "when I die"), the metric system may gain ground at an ever-increasing rate (hence the parabola comparison).  Already we are seeing some small glimmers of hope, as computer manufacturers have completely rejected the idea of using fractions of an inch in screen size measurements in favor of decimal notation (ie .1, .2, .3, etc.), which is used by the SI.

Saturday, November 29, 2008

Consensus and the UN

Today, the United Nations is obsessed with consensus.  The ultimate goal of every debate on every issue is to reach a consensus.  Member nations have even gone so far as to include the word consensus in caucus names like "Uniting for Consensus" (which, admittedly, sounds quite a lot better than the caucus's former name "The Coffee Club").  Why do we lust after consensus, though?

Obviously, a solution that is agreeable to everyone is best, right?

Consider the following situation: nation A (let's call them Athens), nation B (Sparta), and nation C (Troy) all meet to try to come to an agreement on human rights.  Athens is a shining beacon of democracy in the world; they even like to spread the fantastic-ness of democracy to other countries.  Athens supports humanitarian intervention and human rights around the world.  Sparta, on the other hand, has a bit of a problem with a minority within their territory, a problem that the Spartans have decided to deal with by denying every human right to these citizens and initiating a campaign of ethnic cleansing.  The Trojans are the moderates: they support human rights, but not at the cost of national sovereignty.

Should Athens, Sparta, and Troy attempt to agree by consensus, anything they pass must meet only the lowest common denominator of the three doctrines on human rights and intervention.  Therefore, any resolution that the three pass will be completely ineffectual in practice, but will allow Sparta to continue killing their own citizens; Athens can proclaim a victory in the monumental passage of a resolution on human rights by consensus (every UN diplomat gets excited by that little buzzword); and Troy can claim to be the moderator, the calm and sage-like arbiter from whose fertile mind this consensus sprang.

Everyone wins!  (Well, except the Spartan minorities.)

This is the danger of consensus.  If everyone agrees, there is probably something very wrong, especially in an organization like the United Nations, which, by design, includes almost every possible viewpoint on almost every possible subject.

In fact, the UN Security Council veto power - an established power of the five permanent members of the Council, and one that I am very much against - was created so that decisions could, and would, be made without consensus.  Unfortunately, the founders of the UN didn't foresee the P-5 being the very nations pushing for consensus and ignoring the plights of others.  Democracy is a great thing, but when it is in almost everyone's best interests to ignore a problem, or even worse, when they are encouraged to ignore a particular problem in order to reach the diplomatic Eden that is consensus, people suffer.

I've been trying to write a coherent post about the UN for a few days now, and I will probably write more on it in the future.  The UN would make great fiction - a struggle for international governance, rather than regional governance; the problems and conflicts that arise when governing a whole planet; even expansion to other planets.  I think the problem with people who like the UN (myself among them) is that we just can't comprehend why other people wouldn't want what we want - a more peaceful world(s) with better leaders and better living for all.  Maybe it's something about that whole business of having an omnipotent being/organization above you.  God doesn't seem to want to share, I suppose.

Friday, November 14, 2008

Musings on the World Wars

This past Saturday on KUNM, Radio Theatre played a PRX piece by Marjorie Van Halteren about the World Wars and the War in Iraq from the perspective of an American living in France.  In Europe, finding unexploded bombs from the pre-1945 era is commonplace, and there are special units in both Germany and France that do nothing but round up and destroy these still-live explosives.  In fact, several years ago my paternal grandmother, who then lived in Leatherhead, England, called us up to tell us that an unexploded firebomb had been found under the floorboards of a shop in the town.

As Americans, we never have to deal with the after-effects of wars - especially those wars in which we were a combatant.  Since WWI, only one battle has been fought on US soil and none have been fought in the contiguous 48.  Finding a bomb in your back garden must really bring home the reality of a war that ended 90 years ago.

Speaking of bringing things back to life, Kelsey has reposted a very interesting set of pictures from the First World War.  These photos are nothing special in terms of composition or subject matter - they depict soldiers in typical WWI uniform standing in trenches or sitting around - but they are in color, as almost no other photos of the Great War are.

I showed these pictures to my photo teacher, who asked if they had been hand-colored.  I don't think they have been, judging by the accuracy and the detail of the work, but I looked it up anyway.  As it turns out, a method of color photography - the autochrome - was introduced in 1907, just seven years before the outbreak of hostilities in Europe.

The pictures, like the bombs and the radio piece, bring the World Wars slightly closer, subjectively, to modern times.  We can identify more easily with a color photograph than we can with a black and white one.  Perhaps things like these will help us avoid such a war in the future.  Maybe we will never have a war so horrible, so bloody, that it can be described by no other name than simply, "the Great War."  

Only if we remember these artifacts, these photos, these stories.

Sunday, November 9, 2008

Post-election politics, part II of several: Isn't this a bit premature?

I can't believe this.

Maybe I should have foreseen massive resistance to the election of the first black President in history, but I never thought people would be advocating impeachment before he's even taken office.

Two things about this bother me more than anything else: 
1) Impeachment is a serious thing.  It should not be thrown about or used to enforce "family values," lest it degenerate into something that Congress is loath to touch when it is actually needed.  I don't think we would be seeing this had McCain been elected.  Yes, we would see protest and anger, but not calls for legal impeachment.
2) This man is our President-Elect in a time that is perhaps the darkest in America's history since WWII or the Great Depression.  Obama hasn't even had a chance to do anything yet, and I think that when he is tested, he will show the world that he is the President that America needs.
[/partisanship]

Friday, November 7, 2008

Post-election politics, part I of several: Parties

This series of posts, "Re-examining the Election," will be my "super-post" about the election that I promised.  Tonight I will focus on the two party system that we have had for generations here in America.  George Washington warned against parties, but James Madison saw them as essential to a new type of democracy.  All democracies in the world (that I know of) have some sort of party system, but is there a better way?

Last night I was chatting with Kelsey of Plastick Manzikert and he brought up the religious right faction of the Republican Party and how he thought that this election would be the death knell of the religious right.

That would be great if it were true.

Overall, I don't think the Republican Party membership agrees with the religious right - at least, not the Ron Paul-esque factions of the party - but what ends up happening in elections is that religious right candidates and leaders get into positions of power within the party when they would not generally be accepted by a majority of party members.

This got me thinking - if party platforms were less catch-all and more specific, then we would have a more diverse set of parties to choose from and thus we would "weed out" the extremist factions from "mainstream" parties.  This is the kind of thing European governments do.

To which, of course, Kelsey replied, "What about the FN in France [and the BNP in the UK]?  They still get 10% of the vote.  It means no power, but they are scary as heck."

Yes, diversifying and specifying does mean that we get some really scary parties out there, but it also means that they are isolated and can't worm their way into mainstream, catch-all party leadership and run the country from there (think Karl Rove and Dick Cheney).  Not without a coalition, anyway.

OK, so smaller parties are better, but two major problems remain: 1) How do we change the election system to promote more parties?  After all, there is nothing in the Constitution that limits us to two parties (or mandates parties at all), so we must have arrived here somehow.  2) How do we deal with the election of an executive when no party receives an absolute majority of the vote (or something close to it)?

There is a principle in political science called Duverger's Law which states that an electoral system based on district plurality (like the electoral college, where the winner of a state gets all the electoral votes, no matter the margin of victory) or single-member district plurality (where specific districts individually elect their own representatives to a legislative body) will favor two parties or factions in the system.

Since I can't explain it any better, let me just give you the example from the Wikipedia page on Duverger's Law: 
Duverger suggested an election in which 100,000 moderate voters and 80,000 radical voters are voting for a single official. If two moderate candidates and one radical candidate were to run, the radical candidate would win unless one of the moderate candidates gathered fewer than 20,000 votes. Observing this, moderate voters would be more likely to vote for the candidate most likely to gain more votes, with the goal of defeating the radical candidate. Either the two parties must merge, or one moderate party must fail, as the voters gravitate to the two strong parties, a trend Duverger called polarization.

In addition to this effect of polarisation, Duverger also pointed out a purely statistical problem with single-member district plurality (SMDP): if a statistically significant third party's support is spread out over several districts, then no single district has enough support for that party to elect a representative from it (see the sketch below).  This problem can be solved, but can also be created, with gerrymandering - the oft-criticized practice of redrawing district boundaries to benefit or hinder one party or group of voters.
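(To see the problem in miniature, here's a toy example with numbers I made up: ten districts, two big parties trading the lead, and a third party polling a steady 18% everywhere.)

```python
# Toy illustration of the spread-out third party problem under single-member
# district plurality (SMDP). All vote shares are hypothetical.

districts = 10
results = []
for i in range(districts):
    # Parties A and B trade the lead from district to district;
    # party C polls a steady 18% everywhere.
    if i % 2 == 0:
        results.append({"A": 45, "B": 37, "C": 18})
    else:
        results.append({"A": 37, "B": 45, "C": 18})

seats = {"A": 0, "B": 0, "C": 0}
for district in results:
    winner = max(district, key=district.get)  # plurality: most votes takes the seat
    seats[winner] += 1

national = {p: sum(d[p] for d in results) / districts for p in seats}
print("national vote share (%):", national)  # C holds 18% of the national vote...
print("seats won:", seats)                   # ...and 0 of the 10 seats
```

Under a proportional rule, that same 18% of the national vote would translate into roughly 18% of the seats.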

Now before you go around thinking that SMDP leads only to two parties, I have to point out that many successful multi-party democracies have SMDP systems; India, Canada, and the UK all have more than two statistically significant and politically significant parties.  The US, however, doesn't.

In every election since 1984, no third-party candidate has received more than 3% of the popular vote, with the exception of Ross Perot, who received 18.9% of the vote in 1992 and 8.4% in 1996.  More significantly for the American electoral system, no third-party candidate has won a state or received an electoral vote (discounting faithless electors) since George Wallace won five states and 46 electoral votes in 1968.  No candidate from outside the Republican and Democratic parties has won the Presidency since Zachary Taylor, a Whig, in 1848 (at that time the Whigs were a major party).

So, how can we promote the growth of more (and not just more, but smaller and more specific) parties in the American electoral system?  Well, for a start we can get rid of the electoral college and the system of Congressional Districts.  Yes, they were a good idea when the fastest method of communication was a horse and rider, but today these are simply outmoded concepts.  

We should hold national elections for parties to see what percentage of the public actually identifies with a party's platform, not their candidate.  Assign seats in the Congress based on the national results of this election, not individual state results (i.e. 45% Democrat, 30% Republican, and 25% Whig would yield 196 Democrats, 130 Republicans, and 109 Whigs in the House of Representatives and 45 Democrats, 30 Republicans, and 25 Whigs in the Senate).  This is called proportional representation (PR).  Distribution of the Representatives would still be based on population, and the individuals could still be elected by their respective state parties or local district parties, but this ensures that no gerrymandering to disenfranchise third parties could occur.  This might work, though not as well, on a state-by-state level.
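(For the number-crunchers: the 196/130/109 split in the example above is what you get from a largest-remainder allocation - one common PR rule, though certainly not the only one - applied to the House's 435 seats. Here's a quick sketch, assuming whole-percentage vote shares.)

```python
# One way to turn national vote shares into seats: the largest-remainder
# (Hamilton) method. The 45/30/25 split and party names come from the
# hypothetical example above; the choice of rounding rule is mine.

def largest_remainder(percent_share, total_seats):
    """percent_share: {party: whole-number % of the national vote}; returns {party: seats}."""
    # Integer arithmetic keeps the example exact: each party's quota is share% * seats / 100.
    seats = {p: (pct * total_seats) // 100 for p, pct in percent_share.items()}
    remainders = {p: (pct * total_seats) % 100 for p, pct in percent_share.items()}
    leftover = total_seats - sum(seats.values())
    # Remaining seats go to the parties with the largest fractional remainders.
    for p in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        seats[p] += 1
    return seats

shares = {"Democrat": 45, "Republican": 30, "Whig": 25}
print(largest_remainder(shares, 435))  # {'Democrat': 196, 'Republican': 130, 'Whig': 109}
print(largest_remainder(shares, 100))  # {'Democrat': 45, 'Republican': 30, 'Whig': 25}
```

Real PR systems differ mainly in how they round - largest remainder, D'Hondt, Sainte-Laguë - but the basic idea is the same: seats track the national vote.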

Also, a problem that has grown in recent years, but was not even imagined in colonial days, is that of campaign finance.  I realise that I'm reprimanding my own party here, but I don't think that one party or candidate should be allowed to massively outspend the other.  Much as I love the things the Obama campaign and the Democratic Party have been able to achieve with the millions upon millions of fundraising dollars that they received this election cycle, I don't think it was very fair.  I think that it is every citizen's responsibility to find out as much as they can on a particular candidate; no party should have to plaster posters and fliers around and buy up as much television airtime as possible to inform a voter or scare them into voting one way or another.  If ignorant voters vote for someone out of ignorance, well then the country will be run by ignorant people.  You only get out as much as you put in.  But I digress.  

All candidates, in all races, should be publicly funded.  Either no outside funding should be permitted or it should be severely limited and regulated, as it is in France.  If you want to volunteer for a candidate, that's great, go out and do it.  Just make sure that everyone is on equal footing.  Historically this has been one of the biggest hurdles for third parties; they just can't afford to run campaigns.

On the fiscal side of things, though, funding ~15 parties through a primary season all the way to the election could get expensive for the government.  In that case, maybe some tax should be instituted to pay for campaigns, or maybe campaigns should be funded 50/50 publicly/privately if they receive a required amount of the vote.  NO MORE HUGE, LONG PRIMARY SEASONS!  Campaigning should begin maybe in April, at the earliest.  Not only would this save a heck of a lot of money, it would save us all the headache of having to watch political careers go down in flames as well as preventing long, drawn-out party infighting.

OK.  Well then, on to question two.  How do we deal with the election of an executive when no party receives an absolute majority of the vote (or something close to it)?

This is a difficult question.  In other developed democracies, Prime Ministers are appointed by the ruling party or coalition (a subject for another post) or a President is elected via a system of run-off elections which progressively eliminate candidates until only one is left (wouldn't that be a fun reality show?  Run-Off Terror!).  I think that perhaps the latter would be more appropriate for the United States, but then again, we also have a provision in the Constitution that deals with the possibility of no candidate receiving a majority of electoral votes that could be easily adapted to fit a system without the electoral college.

Article II of the United States Constitution (the original procedure, since modified by the Twelfth Amendment) on the election of the President and the breaking of ties:
if there be more than one who have such Majority, and have an equal Number of Votes, then the House of Representatives shall immediately chuse [sic] by Ballot one of them for President; and if no Person have a Majority, then from the five highest on the List the said House shall in like Manner chuse [sic] the President. But in chusing [sic] the President, the Votes shall be taken by States, the Representation from each State having one Vote; a quorum for this Purpose shall consist of a Member or Members from two-thirds of the States, and a Majority of all the States shall be necessary to a Choice. In every Case, after the Choice of the President, the Person having the greatest Number of Votes of the Electors shall be the Vice President. But if there should remain two or more who have equal Votes, the Senate shall choose from them by Ballot the Vice-President.

This method could work in a popular vote system, and I think, considering the existing infrastructure and tradition, it would be the most practical.

Parties might also form coalitions to achieve greater amounts of power.  Allowing large coalitions would be a mistake in a new electoral system.  Coalitions are simply another name for "big, catch-all parties," just as "factions" were for "parties" in the 18th and 19th centuries.  If coalitions with large, moderate member bases and radical leadership get into power, they could do some nasty things.  I'm all for majority rule, but coalitions are a perversion of majority rule in that the few control the many, who in turn control the even many-er.  If that makes sense. 

As you may have noticed, quite a bit of my proposed electoral system comes courtesy of France.  France has a very good electoral system, I think, especially with regard to campaign finance and advertising (there is no TV advertising, only government-regulated news coverage in which everyone gets exactly the same amount of time, and no candidate can use the French colors of red, white, and blue in their campaign material).  Maybe we should learn something.  After all, our Constitution could stand an update, and the French have the most similar form of government (we did inspire their revolution, and they did make ours possible).

Well, that post was long enough, so I think I'll finish it here.  I have more post-election issues to discuss, but those can wait for another day.

Saturday, November 1, 2008

McCain on SNL

As a general rule, you don't make fun of yourself on national television until after you've lost an election.  John McCain seems not to have received that memo.

I'll make this short, because I want sleep, but McCain appearing on SNL tonight was a new low for American politics.  Isn't it kind of pathetic that McCain had to go to "the liberal media" (especially the hated NBC) to get some free publicity for his campaign three days before the election?  

That last bit, I think, is the most important.  If McCain had been on SNL a month or two before the election, then it might be different.  However, his appearance tonight, less than 72 hours before the polls close in the East, screams "desperate" to me.  I don't want to get too confident, but if I were his campaign advisor right now, I'd be calling a preacher to exorcise the stupid out of him.