Wednesday, April 1, 2009

R. v. Morris - Evidence and Formal Logic

“A case is only an authority for what it actually decides. I entirely deny that it can be quoted for a proposition that may seem to follow logically from it. Such a mode of reasoning assumes that the law is necessarily a logical code, whereas every lawyer must acknowledge that the law is not always logical at all.” Quinn v. Leathem, [1901] A.C. 495, from the speech of Lord Halsbury, at 506

This past fall, we studied R. v. Morris, [1983] 2 S.C.R. 190 in Evidence. It was an interesting read - the majority and minority judgments each turn on neat little bits of formal logic.

The crux of the case (at the Supreme Court) was the admissibility of a newspaper clipping. Is a newspaper clipping about the heroin trade in Afghanistan relevant to prosecuting a conspiracy to import heroin from Hong Kong?

In the dissent, Lamer J. (as he then was) argues that the clipping was inadmissible, since it was more prejudicial than probative. The central argument is at p. 204:

Its sole relevancy is through proof of the accused's disposition, the reasoning being as follows: that, because persons who are traffickers are more likely to keep such information than not, people who keep such information are more likely to be traffickers than people who do not, and that a person who traffics is more likely to have committed the alleged offence than a person who does not. The ultimate purpose of placing the accused in the first category (people who keep such information for future reference) is to put him in a category of people the character of which indicates a propensity to commit the offences of which he was charged.

Lamer is saying that the clipping's only use is the impermissible inference - that because Morris had this interest, Morris is more likely to be a drug trafficker. In terms of formal logic, Morris was in the set of persons possessing clippings about the heroin trade. The set of persons having clippings about the heroin trade (at the very least) intersects with the set of persons interested in the heroin trade. The set of persons interested in the heroin trade is a superset of the set of persons involved in the heroin trade.

By my reading of his reasons, Lamer was saying that because this connection is so tenuous, the clipping is inadmissible - since it has so little value in proving anything but disposition.

By the definition of relevance in Paciocco and Stuesser's The Law of Evidence (4th ed.), anything that, as a matter of logic and human experience, is probative of a material issue is relevant. The impermissible inference is generally presented as an exception: general character, or a general likelihood to offend, is not sufficiently strongly related to the specific crime an individual is charged with. However, an interest in the Afghan heroin trade might have some circumstantial relevance to involvement in the heroin trade through Hong Kong.

Unfortunately, McIntyre J. is a great deal more subtle about the logical basis for allowing this evidence than Lamer is about the logical basis for excluding it. Fundamentally, McIntyre says that the clipping should be given very little or no weight, but that it is logically relevant: in addition to suggesting a connection between Morris and other traffickers, it shows that he was informed about the heroin trade.

Although you may not be able to deduce Morris' participation in the heroin trade from the existence of the clipping, it is relevant - it places him in the set of persons who are informed about the heroin trade. Each piece of evidence provides a selection criterion for determining membership in a set of possible offenders. If the intersection of all these evidence-sets has one member, that should then be proof beyond a reasonable doubt of that member's guilt. All information that provides a valid selection function for a set of suspects is then relevant. The clipping then shows that Morris is part of that set of persons with some interest in the heroin trade. While this may not provide a significant narrowing of the set of suspects, it does place Morris in a particular set. Consequently, it's relevant - but given how unlikely it is to allow a decision between two possible suspects, it must be accorded very little weight indeed.
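The set-selection reading above can be sketched in a few lines of Python. The suspect pool and the selection criteria here are entirely invented for illustration - the point is just the mechanics: each piece of evidence admits a subset, and the intersection does the narrowing.

```python
# Toy model of evidence as set-selection. Every name and criterion below
# is hypothetical, made up purely to illustrate the intersection argument.

suspects = {"morris", "alpha", "bravo", "charlie"}

# Each piece of evidence admits the subset of suspects it is consistent with.
keeps_heroin_clippings = {"morris", "alpha"}              # very weak criterion
linked_to_hong_kong_calls = {"morris", "alpha", "bravo"}  # somewhat stronger
handled_the_package = {"morris"}                          # strong criterion

# The intersection of all the evidence-sets is the set of viable suspects.
remaining = suspects & keeps_heroin_clippings & linked_to_hong_kong_calls & handled_the_package
print(remaining)  # {'morris'}

# On its own, the clipping barely narrows the pool - hence minimal weight -
# but it is still a valid selection criterion, hence relevant.
print(len(suspects & keeps_heroin_clippings))  # 2 of 4 suspects remain
```

If the final intersection has exactly one member, that is the "proof beyond a reasonable doubt" of the model; a criterion that removes almost nobody is still relevant, it just contributes almost nothing to the narrowing.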

All that being said, I'm not sure that Lamer was wrong: if the clipping carries little or no weight, does its minuscule probative value outweigh even a diminutive prejudicial effect? And is that prejudicial effect made worse by Morris's association with those who have a stronger tie to the trade?

Tuesday, November 4, 2008

Law Papers from a Computer Science Perspective

I'm not good at writing before I know almost exactly what I'm going to say. This has not been a good thing so far in law.

In computer science this was not a problem. I'm more comfortable with tools for planning code than I am with planning written work. I can organize my thoughts effectively, moving from initial requirements, to pseudocode, to UML, to actual code. When I'm writing papers, however, I start from *lost* and finish somewhere close to *disoriented*.

In a paper for my first-year Constitutional Law course, I forgot to include my central argument - that the private copying levy was necessarily incidental to Canada's copyright regime. I doubt I would have made this mistake in code.

Beyond planning, I can test code to make sure it works. If a program doesn't work, I can *generally* tell by running test cases. I can figure out how a program is broken by how it fails to perform. I cannot do the same with law papers - a program missing its central function doesn't tend to do much other than halt, melt, and catch fire. A paper is much less spectacular.

That's why I decided that I should start blogging - because I don't write enough to be comfortable putting my non-code thoughts to paper (or bytes). I need to practice, and I'd rather practice in a forum where someone might catch my screw-ups.

Saturday, June 21, 2008

Be Careful...

About consumption of nice Australian Shiraz and bad movies. You might enjoy Top Gun much more than you should.

Friday, April 18, 2008

Final exams: 0.4 finished

This is going to be a long weekend of studying for Constitutional and Property Law - but I've finished two of five exams. Hopefully, that means I'm two exams closer to not failing out of law school.

Sunday, April 13, 2008


No time for writing - exams eating my brain.

Tuesday, April 8, 2008


This really isn't related to law, but I couldn't help but laugh when I read it.

This CNN story has a great quote from Hillary Clinton, regarding calls for her to drop out of the race for the Democratic nomination. The quote:

"Let me tell you something. When it comes to finishing the fight, Rocky and I have a lot in common. I never quit."

Anyone else remember Rocky losing on a split decision?

Wednesday, March 19, 2008

Genetic Algorithms and Patents

I grabbed this story from here, and thought it might be interesting to expand upon my comment there, made in response to the first comment. (Here's the comment thread.)

For the chronically lazy, the issue is this: should a design generated by a computer's trial and error be patentable, the same way that human trial and error can produce patentable inventions?

There are a host of arguments against this, but none of them seem terribly rational to me.

The article cited above gives a number of them, and the first comment gives another. Briefly:

  1. The reasoning behind a design created by an evolutionary algorithm may be incomprehensible to its human inventor, and patents are worthless if the person applying for the patent cannot defend or justify it
  2. Disbelief that a self-organizing process can produce better systems than top-down, intelligent design
  3. Not knowing how computer-created designs work means that they're more difficult to analyze, to look for possible failure
  4. And from James Dyson: "Evolutionary algorithms will mean the end of those exciting stories about how people made great inventions by accident... [h]uman ingenuity and intuition should remain crucial in making a success of any product."
I'm sure there are other objections, but these strawmen are enough for now.

Argument 1 seems a little strange. From Chapter 16, page 4 of the Canadian Manual of Patent Office Practice:
"[T]o be considered inventive, the combination must lead to a new unitary result that is different from the sum of the results of the elements; there must be some cooperation or interaction between the elements that produces some unexpected advantage, result, or use." [emphasis added]
In this situation, while the person attempting to prosecute the patent might not immediately understand why the process produces something novel, that seems to argue for the novelty of what the program has created (especially given the post-KSR obviousness standard in the States). Also, I'm not sure if it's necessarily true that the person filing the patent wouldn't be able to see how what they're doing is different - solutions have a habit of being horribly obvious in hindsight.

So many ridiculous patents fail to meet the standard for novelty - they exhibit no unanticipated advantage beyond what a PHOSITA would see from the combination of pieces. Given that many truly revolutionary inventions come about through trial and error (often through trials relating to something else), why is it so wrong to have procedural generation of a novel solution to a problem? Really, this seems to be simply an iterative, simulated solution to this problem, instead of the normal iterative, physical, cost- and time-intensive solution.

Argument 2: The human version of intelligent design doesn't work for complex systems. It may give you a good initial rough draft - but a lot of the actual work is bug-fixes. What initially looks like an elegant solution may end up being horribly unwieldy in practice. Systems evolve as they are built - but not always elegantly, or in a direction that improves them. Evolutionary algorithms aren't that different from human design, taken as a whole.
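The trial-and-error process at issue fits in a few lines of code. A minimal sketch, with a hypothetical bit-string "design" and a toy fitness function standing in for a real engineering problem - the humans supply the seed population, the fitness test, and the mutation scheme; the machine supplies the iteration:

```python
# A minimal evolutionary search. The TARGET "design" and the fitness
# function are hypothetical stand-ins for a real-world test rig.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the arbitrary "ideal" design

def fitness(candidate):
    # Score a candidate by how many positions match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit independently with the given probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# Humans pick the seeds; the machine does the trial and error.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect design evolved
    # Truncation selection: keep the best half, refill with mutated copies.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```

Nobody "designed" the winning bit string, yet the humans who wrote the fitness function and the mutation operator did all the inventive framing - which is rather the point of the argument above.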

Argument 3: This may be somewhat more difficult to refute - it MAY be harder to analyze the output of an evolutionary process than a design whose reasoning is known. I suspect that the way you would actually do it is rather similar: first, look for what's immediately wrong, and confirm that it is. Then, once you've realized that half of it was right, fix it, and start testing... so that you can find the 90% of bugs that you've missed. (Software has made me cynical.)

Argument 4: James Dyson should get off his high horse. Human ingenuity IS involved - coming up with evolutionary algorithms is ridiculously difficult at this point. People also come up with the seeds for the evolutionary process. People verify that the output works in the real world (and yell at the simulation writer when it doesn't).

This does not replace human ingenuity. This is human ingenuity making itself more powerful.