Sunday, December 6, 2009

What to do with multiple OpenIDs?

The ability to have a single online identity is a worthwhile goal, in my opinion, and OpenID seems to be a major step in that direction. It allows login to any OpenID-enabled website through the use of your single identity, and tools like Verisign's Seatbelt make that even simpler. I'm quite pleased to see that more and more websites are adopting OpenID as a means of signing up or signing in.

The question I have now is: What do I do with multiple OpenID accounts?

As more websites adopt OpenID, more of them are also letting their existing users use their existing logins as OpenIDs. This means that I already have at least three OpenIDs and no way to consolidate them.

This might seem to have an easy solution - simply stop using two of them. The problem is that the OpenID I've been using most isn't the Google one, since Google's adoption of OpenID is more recent - and starting a new account on Google under my favoured OpenID would require too many transfers of data and notifications of third parties to be worth it.

So basically, I am looking for a way to use one of my OpenIDs to absorb the existing permissions and accounts of the others.
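One partial answer I've run across is OpenID delegation: if I host a page at a URL I control, I can make that URL itself my OpenID and point it at whichever provider I currently favour. It doesn't retroactively merge my existing accounts, but it would at least give me a single identifier from here on. A sketch of what the page's head section would contain - the provider endpoint and identifier URLs below are placeholders, not real values:

```html
<!-- Delegation tags making a personal URL act as an OpenID.
     Replace both URLs with the values your actual provider publishes. -->

<!-- OpenID 1.1 -->
<link rel="openid.server" href="https://provider.example.com/openid/server">
<link rel="openid.delegate" href="https://provider.example.com/yourname">

<!-- OpenID 2.0 -->
<link rel="openid2.provider" href="https://provider.example.com/openid/server">
<link rel="openid2.local_id" href="https://provider.example.com/yourname">
```

I would then sign in everywhere using my own URL; if I later decide to switch providers, I need only edit these tags, and relying sites keep seeing the same identity.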

Monday, September 7, 2009

Discordian Poe

For those of you unfamiliar with Poe's Law, I recommend you take a look at this link, because I am about to turn it completely inside out.

Every religious, spiritual, and/or mystical belief has equal validity, including the belief that this equality is a joke.

Hooray for fundamentalist Discordianism!

Monday, August 31, 2009

Pure Intentions, Or Why I Dislike Networking

When I say "networking" as in the title of this post, I am talking primarily about the way people in the professional world make and use acquaintances as part of their jobs.

Almost every job on the planet benefits from networking - making personal connections with people who can help you out in your work. Almost every job needs at least some degree of networking to function, and it can be very difficult to get a job in the first place without knowing at least some of the right people - a resumé and a good interview are usually trumped by a personal recommendation. All of this makes very good psychological sense as well, since we are a social species who evolved in relatively small, insular communities (in comparison with the modern world, that is).

But I don't like it.
I idealistically think that the person who gets the job should be the person who can best do the job, not the person recommended by someone close to the employer. As far as I can tell, most people agree with this sentiment, at least superficially.

I am not a very socially capable person. I accept that social situations are not my natural environment, and I have put significant effort into gaining what little skill I have at present. I am never going to be a politician, or a talent agent, or a sales representative.

There are many jobs for which it makes good sense to require a high degree of social skill, and there are many jobs whose content largely is networking itself. In these cases it would be ludicrous to attempt to remove networking as a requirement for obtaining the job.

Even in academia, my chosen direction, a certain degree of networking is sensible should one desire recognition or a position with administrative duties.

But networking itself, when not in a job which explicitly involves it, seems disingenuous and distasteful to me.

When making a new friend, most people won't immediately consider how the friendship could benefit their career - that's considered a form of using the person.
There is clearly a difference between making a friend so that they will be useful, and making a friend who later happens to be useful. Right?

The ways in which a person could be useful in furtherance of one's own goals are usually fairly simple and easy to see. As such, we don't even have to consciously recognize potential uses in order for them to be considered in our decision-making process. These considerations are likely to result in biases through which people make friends with people who are likely to be useful in future, without the moral penalties associated with doing this consciously.

Here's where the problem comes in: reasonable expectations of knowledge, even in retrospect, should apply to present considerations.

In more detail, when considering whether or not asking a favour of a friend "counts as using them" in an inappropriate sense, we should take into account what we would reasonably have known throughout the prior relationship, even if we don't have a particular memory of recognizing that knowledge.

For example, if I ask a friend of mine to help me with my computer, I should take into account the fact that I knew he was a software engineer when we first met, even though I didn't specifically think at the time of him helping solve my computer problems.

Followed carefully, this reasoning results in my preferring to pay a stranger to do a job rather than have a friend do it for free.

Oddly, none of this presents any barrier to my doing things to help the careers of my friends; it is simply another situation in which I find it morally unpleasant to accept reciprocation, for the simple reason that it is morally distasteful to expect it.

Friday, June 5, 2009

Rights of the Individual vs. Rights of the Individual

It is a staple of science fiction to tell a tale in which artificial intelligences must struggle to attain equal rights with humans. It is also commonplace to recognize that, if a human being uploads their consciousness to a machine, the rights are carried with that consciousness.

People find it much easier to accept that an artificial intelligence deserves rights when that intelligence inhabits a body. This is, of course, a natural by-product of the way our brain interprets other humans as moral agents - our moral sense evolved to recognize other humans as deserving of rights.

When speculating about a consciousness being uploaded from its originating human body to a machine, we usually assume that either the body is then rendered effectively comatose (a shell or doll, so to speak) thus removing moral obligations to the body, or we assume that the consciousness is copied to the machine while also remaining in the body, creating two beings each deserving of moral consideration.

I would like to consider the former situation - suppose that a human consciousness is uploaded to a machine and the body then retains its neural ability to regulate breathing, heartbeat, and other unconscious brain functions, but does not keep memories or even acquired skills. Is the body then truly no longer deserving of moral consideration?

Shouldn't we then treat this body as a somewhat comatose individual? In this case, this "uninhabited" body may very well relearn motor skills and language as an infant would (alright, neuroplasticity in adults is much lower than in infants, so it could relearn these things as a developmentally challenged infant would), then it might proceed to develop another new personality. Should we dismiss this possibility outright and simply treat this body as a shell?

I think these ideas certainly merit consideration, but are highly unlikely to be resolved until such time as we actually develop some form of consciousness-uploading technology.

Additionally, the primary source for my thinking about these ideas is the series Dollhouse which, near the end of season 1, poses the question of whether or not the mind has an obligation to the body.

Clearly, of course, all this presupposes that mind and body are, in fact, separable in some meaningful sense, which I believe to be a reasonable idea but clearly not yet supported by evidence.

Wednesday, January 28, 2009

An Interesting Example of Foresight

I sat in the audience at a recent religious discussion panel and had the good fortune to share conversations with two of the panelists after it had finished. Both panelists (the Muslim and Christian panelists), when questioned about why they believed (or why I should believe) presented practically identical variations of the first cause argument.
While I think that the standard response to the first cause argument (where did God come from, then?) is all well and good, I prefer to rely on my own brand of confusing weirdness - a sort of mathematical argument that even though the universe has a finite age, there does not need to have been a first instant of time, and hence no first cause is needed. Additionally, I like to object to the prohibition on infinite causal chains and to the crucial separation of cause from effect in the early universe, but this time I focused on time as a spatial dimension and so (possibly) finite but unbounded.
The difficulty with this argument, as I said explicitly several times during these conversations, is that while it nicely dispatches the question of a first cause, it proposes no answer to the question of "why is there something rather than nothing?" - the best formulation I have yet heard of the idea the panelists were apparently pondering.
I was surprised, however, that both panelists seemed extremely resistant to the question being phrased in this fashion. It makes sense for them to resist it, since "God" is a clearly unsatisfying answer to this phrasing, presumably being a "something" himself, but I find it distasteful to think that the panelists had seen this consequence of the phrasing and consciously resisted it. Instead, I am left with a puzzle: if they were being honest and candid about their thoughts, why would they refuse such a clear restatement of the question at hand?

Friday, January 9, 2009

Role-Playing Games, Schizophrenia, and Skepticism

What is it that role-playing games, schizophrenia, and skepticism have in common?

Besides being demonized by Christians, they all involve the reality-testing circuitry in the brain.

Role-playing games, and I refer here primarily to the pen-and-paper variety, require a group of people to all take on the mindsets of fictional characters inhabiting the same imaginary landscape while keeping the fictional separate from the real. This is not difficult for most people when it comes to dealing with their own character and eir environment, but it suffers occasional (if notorious) difficulties when it comes to separating the personality of another player from the personality of the corresponding character. In other words, in-game conflicts sometimes get accidentally translated into real life.
Experience playing RPGs develops the skill of compartmentalization while at the same time bringing it into the consciousness of the player. Almost everyone has at least some capability to compartmentalize, but one of the primary causes of inconsistent thought and behaviour is a person failing to realize that two compartments of their thought are contradictory. Becoming more aware of compartmentalization enables people to better avoid these sorts of situations.

Schizophrenia can be characterized in part as a critical failure of the brain's reality-testing circuitry - its major symptoms include delusions and hallucinations - in particular this is true of the paranoid variety.
Everyone has thoughts (usually fleeting) considering the possibility that others are acting maliciously towards them, and everyone occasionally imagines what it might be like if certain pieces of their life were acting to directly oppose them, but someone with paranoid schizophrenia simply doesn't treat these thoughts as imagination. A schizophrenic can't simply dismiss these thoughts; sometimes just imagining the possibility of betrayal is enough for em to believe that it is true.

There is some speculation among neuropsychologists that even normal people actually believe whatever they imagine, but are able to subsequently disbelieve it after even a moment's consideration. If this is the case, it seems all the more important for skepticism and the scientific method to be included in any educational curriculum.

Skepticism is the consistent application of strict evidentiary criteria to any proposition. In addition to being a fundamental part of the scientific method, skepticism is practiced by almost everyone in a large variety of situations (if you think you are never skeptical, I've got some property in Alpha Centauri I'd like to sell you).
The primary realization of skeptical thinking is that the human brain's reality-testing circuitry is not very accurate in most situations. Once you have had this realization, it is simply a matter of questioning how your innate reality-testing can be supplemented enough to meet the demands of modern life.

Really, the only negative consequence of applying too much skeptical thinking to a question is that it can waste time and energy. For example, is the table in front of you real or imagined? Your innate reality-testing software says that if you can see it and touch it, it's real. Of course our senses can be fooled pretty easily, so we can go to great skeptical lengths to determine whether or not that table is real. We can go through months of scientific testing, double-blind trials (of some kind), and painful statistical analysis to answer the question, but all this analysis is barely more likely to give us the correct answer than the simple and obvious test, wasting a whole lot of time and energy in the process. The difficulty in thinking skeptically in practice is developing guidelines for determining which situations are worth leaving to your brain alone and which are worth investigating in detail.

There is an additional point of contact between skepticism and role-playing games, specifically fantasy-based RPGs. In a role-playing game you can imagine what a world which involved magic might actually be like. The more experience you have imagining what life in a magical world might be like, the easier it is to recognize the faults of magical thinking in the real world.