Thursday, January 31, 2008

Arc Wars

Ok, this is going to be a short one, but it has to be said.

Arc (the new Lisp dialect from Paul Graham) came out this week, and the response has been HUGE. Not in that lots of people are spending lots of time building cool application prototypes with it, but that lots of people are spending lots of time in flame-wars over whether Arc is cool or not. Seriously. There is a huge conflict emerging between one camp carrying a banner that says "Arc Sucks!", and another group across at the other street corner carrying signs that say "YOU suck because you think that Arc Sucks!".

Now, I'm going to preface this with a disclaimer: I have not downloaded and tried out Arc yet. I plan on doing it this weekend, but right now I'm speaking in ignorance. Ok, now that we got that out of the way: THIS WHOLE CONFLICT IS A WASTE OF TIME AND EMOTIONAL ENERGY!

Here are my thoughts on the subject, point by point:

Evaluating a new language and deciding you LIKE it: Reasonable.

Calling anyone who DOESN'T like the new language a fool and an all around bad person: Unreasonable.

Evaluating a new language and deciding you DON'T like it: Reasonable.

Lambasting the developer of the language mercilessly as though he killed your dog: Unreasonable.

Suggesting a new feature to a developer of an open-source language: Reasonable.

Calling the developer a hack and a failure for not including the feature you wanted in release one: Unreasonable.

Writing a commentary about a new open-source language that points out some of the good things about it and looks positively towards its future: Reasonable.

Writing a commentary about a new open-source language that religiously defends a work-in-progress as though it is the Ark of the Freaking Covenant: Unreasonable.

I think you get the picture. As a collection of smart people, I think hackers have a responsibility to be a little more rational than the last few days have shown us to be.

Tuesday, January 29, 2008

Hacker Management

Working as the immediate middle-manager over a group of tech-nerds is probably not a very fun job. I don't know for sure, I haven't done it myself, but my impression is that it's no walk in the park. After all, if you have n reasonably adept programmers working under you, then you are probably dealing with n^2 ego problems, and it's likely that every one of your employees believes that he is smarter than you (and half of them are right). So I'll give credit to the guys who have that responsibility for doing a job that is guaranteed to be difficult.

That being said, there are a few things that I feel like only the great hacker-wranglers really have grasped.

1) Hackers are arrogant

There's no tip-toeing around this. Almost every hacker I've ever known has thought very highly of himself. You might not always know it from conversation; hackers are usually very smart as well, so many of them have cultivated "humble" personalities or are very friendly and magnanimous. It doesn't matter, they still probably think they are the best thing to happen to programming.

I don't say this to be mean, I fall into this category myself. I say it because as a manager of programmers it's important to realize that any political posturing or intellectual competition is a waste of time. It is useless to try and prove that you still have that technical edge that you had back when you were a developer. It is painful and wasteful to arrange situations that inconvenience them in order to make sure they know who's boss. Even if you are in the right, you will not prove anything to them. Far better to let them think they're all geniuses and channel their energy in a positive direction. Believe me, you'll save yourself plenty of headaches this way. Ultimately you're only adding value to the equation if programmers are more productive with you at the helm than as a coding-commune, so tell them that they are great, and then give them great things to do (and hold them to it; nothing motivates a hacker like a threat to his reputation).

2) Hackers are expensive

Your company is spending a lot of money daily to have smart people around to do hard things for you, and you'll pay them the same amount of money whether they are writing code or doing nothing (i.e. attending meetings, formatting spreadsheets, preparing reports, organizing office parties, chaperoning new employees, etc.). What do you think is more productive in the long run? You don't use bazookas to kill mice, and you shouldn't use hackers to answer phones (unless it's tech support; every hacker should have to support the product they built in order to get an appreciation for how much better it would have been if they had written their application right the first time).

3) Hackers are finite

There are only so many good hackers available for hire, and there are lots of people who want good hackers on staff. People might read that sentence and think that I'm complaining about money or benefits, but that's not the case (although I've never turned down a raise). I think there's one thing that will keep a hacker interested in coming back to his job every day, and that's a challenging problem to work on. Not impossible, mind you; managers who try to death-march their programmers into the pixel-mines are doomed from the start. But an intellectually stimulating project is equal to a 25% pay increase in my book. Give your code-warriors a tough nut to crack, and you will have a happy family.

Now, what if your company sells paper and you just have a few cruddy internal apps for the IT department to maintain? Well, I'll admit, that's a problem; you probably won't retain star talent. However, I think the key in a situation like this is to give your developers a little freedom to experiment. A good hacker is always looking for ways to optimize things, and he might find a side project that makes his life (or other people's lives) easier. If he does, let him loose on it. I know, I know, there are other important things that need to be worked on, and I'm not saying you should ignore those items, but I can give you a nearly 100% guarantee that as soon as you put a lid on a hacker's good ideas, you will be seeing a goodbye letter within a few months.

Monday, January 28, 2008

Borrowing at 100% Interest

"Hey, can I talk to you for a minute?"

I hear this phrase (or some variation thereof) probably 15-25 times in any given day. That's not necessarily a bad thing, in and of itself. I've heard it said that any given problem that comes up during the course of a software project can ultimately be reduced to somebody not knowing some critical piece of information at the right time. In light of that, spreading around the knowledge is good, and pretty much every post-mortem I've ever been to has had "Communication" under the list of "things we could have used more of".

At the same time, that "minute" in which I'm talking to someone else instead of coding has an opportunity cost. You may have cleverly spotted it already and you might be saying, "of course, that's 1 minute that you aren't coding, so the cost is 1 minute of productivity". Well, that's the OBVIOUS part. The real cost is still behind the curtain.

Talk to any hacker about what it feels like to be in "the Zone", and he'll know what you mean. You're super-productive, you can see the path for your code clearly in front of you, there's plenty of things you're logging away for later to tweak or simplify, and you feel great. As Paul Graham says, you have the code loaded into your head. When someone comes along and interrupts the flow, it can take some time after they've left to re-orient yourself, and although the formula isn't exact, I'm prepared to hypothesize based on my own experience that it takes about as much time as I spent out of the code to get back into it. 30 seconds of talk means 30 seconds of finding my place again. 20 minutes of chit-chat, and it's probably going to be 20 minutes of tabbing and browsing until I hit my stride again.

So what's the real cost of an interruption? Let's break it down:

If I'm being talked to in the hallway away from my desk, it will take me 5 more minutes to return to my desk than it would have otherwise. Cost of the discussion?

5 minutes.

If I'm interrupted while coding, then there's the 5 minutes I spend in conversation, and the 5 minutes it takes to get myself back in the game. Total cost?

10 minutes.

Ergo, the cost of an interruption in terms of productivity is a 100% interest charge. Extrapolate that out over a MINIMUM day's worth of interruptions (15) averaging 5 minutes a pop:

5 minutes * 15 occurrences = 75 minutes
+ 100% interest = 75 minutes
= 150 minutes
= 2.5 hours of lost productivity

Ain't math a bitch? 2.5 hours is 31.25% of my 8 hour workday, and that isn't counting the time lost to meetings and other corporate culture byproducts.
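If you'd rather let the machine do the arithmetic, the whole model fits in a few lines; here's a quick sketch (the function and variable names are mine):

```python
# Cost model from this post: every interrupted minute costs a second
# minute of re-entry time, i.e. borrowing at 100% interest.
def interruption_cost(count, avg_minutes):
    return 2 * count * avg_minutes

daily_minutes = interruption_cost(15, 5)    # 150 minutes = 2.5 hours
daily_dollars = (daily_minutes / 60) * 50   # at $50/hr
monthly_dollars = daily_dollars * 22        # 22 workdays in a month
print(daily_minutes, daily_dollars, monthly_dollars)
```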

Now we all know that time is money, so let's put a dollar sign in front of these figures.

Assume that a software developer earns $50/hr. 2.5 hrs is then transformed into $125 daily of lost time. Over the course of a month (22 workdays) that turns into $2,750, and if you take that multiplied by twelve then you will discover that the company this software developer works for is losing $33,000 annually just because people are distracting him for a few minutes here and there.

Can you believe that? Well here's a figure that's even better. Joel of Joel on Software has repeatedly trumpeted the value of private offices for developers, and if our developer above had one with a door on it I bet he could at least have all the non-emergency discussions take place during times when he's out in the hallways taking a break anyway. That would mean that he wouldn't incur the cost of diving back into the code after each interruption, and this would result in a $16,500 increase in productivity annually. Wow.

So, would you ever take out a loan if you were going to have to pay it back double? Probably not, unless it was a real emergency, and then it would probably be worth it, but under most cases you would do what you had to in order to minimize the impact of the situation on your financial wellbeing. Well, the same principle applies to those diligent engineers hacking away in your IT department. Interrupt them if you must, but be aware that the time you use is being borrowed at 100% interest.

Sunday, January 27, 2008

Lisp-ing, Part 5: console I/O

This is Part 5 of my tutorial series on lisp.
<<PART 4 PART 6>>

In the Lisp interpreter, any time you run a program, the return value of the top-level function will be printed on the console. For playing around and testing new functions, that's all you'll really need, but one of the basic requirements of most programs sooner or later is the ability to read input from a user at the console and to print prompts to the console from a location in the code that is not necessarily the top-level function. For this purpose, there are a couple functions built into Common Lisp that provide you the ability to do simple I/O with the console.

Formatted Output

If you are a user of something like C#, you will probably recognize the following statement:

Console.Write("Hello, {0}. \n The Time is {1}", "Stan", "1:46");

This is a common way of doing console output, passing a string as a parameter with a couple characters as placeholders for arguments to be passed in afterwards (usually these arguments would be derived from another function or calculation, but I hardcoded them for simplicity's sake). Special escape characters like "\n" are used to signify things like new-lines. Lisp output works in much the same way when using the "format" function:

> (format t "Hello, ~A.~% The Time is ~A" "Stan" "1:46")
Hello, Stan.
The Time is 1:46

So first we have the "format" function call, much like any other lisp function. "format" takes a variable number of parameters, but must have a minimum of 2. The first one, "t", indicates where the output will be going. "t" is used when you want it to go to the default location (in our case, the console). Next is the string literal to be formatted, with any place-holders marked with the "~A" directive rather than the more common {0}, {1}, {2} sequence. Also, a newline is indicated with the "~%" directive, rather than the more standard "\n". Then you just have to add another parameter for every "~A" you included in the format string, and you're set. As mentioned above, those parameters used to fill in the "~A" place holders are usually a result of a function call or something, not just hardcoded strings.
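To see the directives doing a little real work, here's a sketch where the last filler argument comes from a function call instead of a hardcoded string:

```lisp
> (format t "~A plus ~A makes ~A.~%" 2 3 (+ 2 3))
2 plus 3 makes 5.
```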

Reading console input

Lisp makes the collection of input from the console line incredibly intuitive. How intuitive, you ask? Check it out:

> (read)

That's it. The read function will halt the program until it receives user input, at which point it will return that input as its return value. This makes interactive functions very easy to write. Here's a simple example:

(defun greet ()
  (format t "What is your name, user?~%")
  (format t "Hello, ~A" (read)))

This simple function prints a prompt and then waits for input with the "read" function being used to collect input to feed to the "format" function on the third line. Simple and clean.
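Since "read" returns whatever value the user typed, you can also feed it straight into other functions. A small sketch (the function name is mine, and it assumes the user actually enters two numbers; there's no error handling):

```lisp
(defun add-two ()
  (format t "Enter two numbers:~%")
  (format t "Their sum is ~A~%" (+ (read) (read))))
```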

Friday, January 25, 2008

Worth less when invested in.

Well, I've arrived. I've had nine posts in the blogosphere, and two companies have now contacted me asking me to blog positively about their products in exchange for cash. What a deal, right?

No. I rejected both offers and would do so again in a heart-beat regardless of the amount of money on the table.

First of all, I can't believe any company would be daft enough to give money to somebody who probably has a total of 10 readers so far, most of whom probably agree with his opinions anyway. Second, I just wouldn't be able to do it, it would leave a bad taste in my mouth.

Some people are probably wondering why that is. After all, if you're going to blog anyway, and you're going to write about certain products you like using because they help you, why not collect a little payout on the side? Let's do a little thought experiment to see why.

You and I are good friends. You've been single for some time and are looking for a date, so I start telling you about this girl I know who would be great for you. She matches your personality, she likes a lot of the things that you do, she's even physically attractive. You think this sounds great and ask me to set you up on a date.

Two days later, you come back to me saying that you had a GREAT time on your date with this girl and that you've already scheduled another one for next weekend. Good for you! But then, in the course of our conversation, you find out that the girl you went out with actually gave me money before I recommended her to you. How does that feel?

Does it matter how MUCH money it was? Does it make a difference that you legitimately enjoyed the date you had with her? What if I told you that I sincerely thought she was a good match for you anyway, I just took the money because she was offering but it hadn't really changed my opinion about her?

None of that would make a difference, you still would be an unhappy person. Why? Because I represented myself to you as your friend and that implies that my primary interest in a recommendation of any sort is your well-being. As soon as there is cash in the equation, even if it really doesn't change my opinion at all, it's still a conflict of interest and it means that I leveraged the trust that you had in me when I had a potential ulterior motive.

That is why a blogger cannot take cash for posts. That's not a blog, that's marketing. A blog is sharing my opinion with others because I have found certain ideas or technologies useful and I want them to benefit from it. If I take cash, you shouldn't trust my opinion anymore because I have misrepresented our relationship. An opinion is one of the only things I can think of that becomes worth significantly less as soon as someone else has invested in it.

Thursday, January 24, 2008

Zetassociates Web Presence Launched!

Ladies and Gentlemen, we are now on the Web! Zetassociates has been in existence for almost a year now providing custom software consulting and training services, but we JUST NOW got our very basic and early-version web site up and running! Check us out at

Zetassociates has some other members that have presence in the blogosphere as well. Steve Asher (to whom special thanks is due for putting the website together) can be found as Build Without Boundaries on blogspot. Travis Heseman moonlights as Code Ronin, and I, of course, am here.

So if you have a problem in your business that is in any way related to software, now you know who to call.

Wednesday, January 23, 2008

Lisp-ing, Part 4: Functions

This is Part 4 of my tutorial series on lisp.
<<PART 3 PART 5>>

Lisp is a language that is usually associated with the "Functional Programming" paradigm. For those of you who aren't familiar with that term, functional programming takes a problem and models it as mathematical functions for evaluation. This is in contrast to imperative programming where "state" is the focus, and the outcome of a program is modeled in changes of state.

As a primarily-functional programming language, the most important building block of a Lisp program is the function, so here we're going to talk about the specifics of defining new functions and using them.

Defining a function

If you're used to using one of the more common enterprise languages like Java or C#.NET, the following probably looks pretty familiar (example code will be written in java):

public class MyMath {
    public static int AddTogether(int oneNumber, int otherNumber) {
        return oneNumber + otherNumber;
    }
}

This is a simple function for adding two numbers together. Now here's the same function, but written in Lisp:

(defun addTogether (oneNumber otherNumber)
(+ oneNumber otherNumber))

That's a little smaller, isn't it? Let's examine some of the differences. First, the java example has this "public static" at the beginning. In an imperative language, these are important specifiers. "public" says that any code can use that function, both code within the MyMath class structure and code elsewhere. Other modifiers like "protected" and "private" would restrict access to that function to a subset of code (child classes, and only the defining class, respectively). "static" means that this is a "class-level" function, or in other words, you don't need to have an instance of the MyMath class already constructed to use this function. You can call it without instantiating any objects. If "static" were omitted, you would have to first instantiate a "MyMath" object, and then make the function call.

So how is this information conveyed in Lisp? Simple: it doesn't need it. Access modifiers like "public" have to do with whether code outside the class can use the function or not. Functional programming doesn't use classes, the AddTogether function is just a piece of code that adds two numbers together and returns the result, and once it has been loaded into the runtime it can be called from any other piece of code. The same goes for "static": there aren't any "objects" or "classes" in functional programming, so there's no need to specify whether the function belongs to the class or to instances of the class; it just exists.

Next in the java example is the return type: int. This tells other code to expect an integer as the return value when that function is called. Once again, Lisp forgoes this information. Not only does it not declare a return type, there is no "return" statement at all. In lisp the return value of any function is simply the result of the last expression evaluated in the body of the function. In our case, the last expression is (+ oneNumber otherNumber), which calls the "+" function with the parameters "oneNumber" and "otherNumber", so that is what is returned as the result of the function. (You might ask why we've defined a function specifically to add numbers together, which the "+" function already does. The answer? It's easier to focus on the structure of a function definition when you are looking at a trivial example. Don't worry, I'm aware I haven't invented anything new.) ;-)
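Since the "last expression wins" rule trips people up, here's a contrived sketch (the function name is mine) showing it in action:

```lisp
(defun last-one-wins ()
  (+ 1 1)   ; evaluated, but its result is thrown away
  (+ 2 3))  ; the last expression becomes the return value

> (last-one-wins)
5
```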

Calling a function

So what about using those functions? Well, our java example would look something like this:

int sum = MyMath.AddTogether(5, 10);
In lisp (using interactive mode) the same thing would look like this:

> (AddTogether 5 10)

As you can see, when you're calling a function in Lisp, you don't write the method name and then a parenthesized, comma-separated list of arguments. The function call and the arguments together make up a list (just like everything else).

Alright, so if you skipped the entire article just to see if there was anything interesting at the bottom, here's what you missed:

1. You define a function in Lisp using this syntax:
(defun [function-name] ([parameter list]) [body])
2. You call a function in Lisp as follows:
([function name] [argument 1] [argument 2] ...)

Pretty basic. Now, using your newfound lisp-reading skills (and a few tricks from the previous lisp tutorials), figure out what the following function does:

(defun UNK (num lst)
(if (zerop num)
(car lst)
(UNK (- num 1) (cdr lst))))

And what this function call will print out:

> (UNK 2 '(A B C D E))

Tuesday, January 22, 2008

Remote NHibernate: 5 Lessons Learned

Distributed systems are a common need for big companies, but there are some pitfalls associated with their construction that it is often hard to find help for online. One of the big problems I ran into recently was with a distributed app I'm building for the corporation I currently have a contract with (MEM), and after spending many long hours wrestling with it, I want to post some of my findings to help anyone else who might run across similar issues. All salient points are bolded as LESSONS LEARNED.

The Problem

The application is an admin tool for examining and managing one of the batch systems that runs nightly. The application must:

1) Be a desktop application (style preference by the users).
2) Only connect to the database (Oracle 10g) through an application server.
3) Operate within a "reasonable" range of performance parameters.

1 and 2 taken together meant it was for sure going to be a distributed application.

I wanted to use an ORM to avoid having to write all the SQL code for the large object model, so I chose to use NHibernate as my persistence layer.

Figuring this was a simple problem to solve, I set off to get this sucker up and running.

The incremental solution

(All example code is C#.NET)

Ok. First things first, we already have a Service interface built for data access that all of our applications use (that way data access can be changed easily from DB4O to Oracle or whatever without affecting the application code). A simplified version just for querying data would look like this (there are lots of operations the real data service takes care of [Update, Delete, predicated queries, refreshing, commit, rollback, etc.], but they would clutter up the example):

public interface IDataService
{
    IList<T> Query<T>();
}

So priority one is to build an implementation of this service that uses NHibernate to talk to the database. Once I've proved that's doable, then I can worry about having the server run that code and getting the result to the client. Here is what I came up with.

public class OracleNHibernateDataService : IDataService
{
    private readonly Configuration config;
    private readonly ISessionFactory sessionFactory;
    private readonly ISession session;

    public OracleNHibernateDataService() : base()
    {
        config = new Configuration()
            .SetProperty("hibernate.connection.connection_string",
                "Data Source=DVLP_DATABASE;User Id=/;");
        // mapping setup omitted for brevity

        sessionFactory = config.BuildSessionFactory();
        session = sessionFactory.OpenSession();
    }

    public IList<T> Query<T>()
    {
        try
        {
            return session.CreateCriteria(typeof(T)).List<T>();
        }
        catch (Exception ex)
        {
            throw new Exception("Query failed!", ex);
        }
    }
}

This worked well enough for a couple tests, and I began to construct the application using this service as my data layer, not realizing just what a painful journey I was about to embark upon.

NHibernate session management

First I did some research to make sure everything I had done so far conformed with NHibernate best practices. It turns out that I had overlooked one issue that was potentially critical: I was maintaining an open NHibernate session indefinitely as long as the application was running. That session (according to the forums I was reading) was holding open a database connection, which is a rare and valuable resource. In order to minimize the time I was consuming database resources, I disconnected the session as soon as I created it (session.Disconnect()) and modified the Query method to reconnect and disconnect around each call:

public IList<T> Query<T>()
{
    try
    {
        session.Reconnect();
        return session.CreateCriteria(typeof(T)).List<T>();
    }
    catch (Exception ex)
    {
        throw new Exception("Query failed!", ex);
    }
    finally
    {
        session.Disconnect();
    }
}


Ok, first problem solved. The database was now being used fairly efficiently. LESSON 1: ONLY KEEP THE NHIBERNATE SESSION OPEN AS LONG AS ABSOLUTELY NECESSARY. Now I began building the UI and the intermediate layers in earnest, and before long I had hit the build button and was anxiously awaiting my first look at an active UI representing the data...

Lazy Initialization

...which never appeared. Yes, it turns out the data being marshalled from the database had so many relationships that it was taking NHibernate about 5 minutes to put it all together. Unacceptable. Especially since the user was likely to only use about 5% of the information being loaded. I was prepared for this eventuality, though, since I had read that NHibernate supported Lazy Initialization. Basically, if you query for an object and have specified in the mapping file for that object that some of its members should be loaded lazily (lazy="true"), then only the members without that attribute (or members with lazy="false") will be loaded into the object. All the lazy data is not retrieved from the database until you call the getter for that member, at which point it will make the database call and initialize the member "just in time". That way my application could load only enough data for the user to decide where to navigate next, and continuously load objects deeper in the graph as the user requests that information. Problem solved, right?
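For reference, the lazy flag lives in the class's mapping file; a hypothetical fragment (the class and column names here are invented for illustration) might look like this:

```xml
<class name="BatchRun" table="BATCH_RUN">
  <id name="Id" column="RUN_ID"/>
  <property name="RunDate"/>
  <!-- this collection is fetched only when its getter is first called -->
  <bag name="Steps" lazy="true">
    <key column="RUN_ID"/>
    <one-to-many class="BatchStep"/>
  </bag>
</class>
```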

So I fire up the app once again and am promptly greeted with a LazyInitializationException - collection could not be loaded lazily - no session. Yes, that's exactly what it sounds like. Because the NHibernate session has been disconnected, I cannot load data on demand. So here we have a catch-22 of sorts. On the one hand, if I want to have good performance and only load the data necessary at the time, I need to leave the session open. On the other, if I want to conserve database resources and not leave a connection open indefinitely, I need to close the session when I'm not using it. LESSON 2: IN NHIBERNATE, PURE LAZY INITIALIZATION AND SESSION MANAGEMENT ARE A TRADE-OFF.

I considered giving up at this point and resorting to standard ADO, but I had invested time into writing those mapping files and I'd rather maintain those than write PL/SQL procedures any day. So what's the solution?

Lazy Activation

I can't load the data ahead of time, there's just too much of it, but there's no way the sysadmins are going to let me leave a database connection open and idle on the server. So, I decide to introduce a manual compromise in the form of an "Activate" method on the data service:

public interface IDataService
{
    IList<T> Query<T>();
    void Activate(object obj,
        params string[] propertyNames);
}


And I add the following methods to my NHibernate Implementation:

public class OracleNHibernateDataService : IDataService
{
    public void Activate(object obj,
        params string[] propertyNames)
    {
        try
        {
            session.Lock(obj, LockMode.None); // attach the object to the session
            ActivateProperties(obj, propertyNames);
        }
        catch (Exception ex)
        {
            throw new Exception("Activation Failed!", ex);
        }
    }

    private void ActivateProperties(object obj,
        string[] propertyNames)
    {
        foreach (string propName in propertyNames)
            ActivateProperty(obj, propName);
    }

    private void ActivateProperty(object obj,
        string propName)
    {
        if (!propName.Contains("."))
        {
            GetActivatedValue(obj, propName);
        }
        else
        {
            // walk one link down a chained property like "Order.Customer.Name"
            string[] chain = propName.Split('.');
            object value = GetActivatedValue(obj, chain[0]);
            ActivateProperty(value,
                propName.Substring(propName.IndexOf('.') + 1));
        }
    }

    private object GetActivatedValue(object obj,
        string propName)
    {
        PropertyInfo property = obj.GetType().GetProperty(propName);
        object value = property.GetValue(obj, null);
        string activate = value.ToString(); // touching the value forces lazy loading
        return value;
    }
}


Verbose, yes. Certainly more work than I wanted to do. But now when the user asks for more detail about an item, I can pass it to the dataservice, which will load all the properties specified (recursively, in the case of a chained property list) and the object will be good to go. Additionally, when I run the program this time everything behaves as expected. Cool, time to write the server code.

Now, ideally, I don't want the application to even know that it's using a server. It should just be asking the DataService for information and getting back what it wants. So what I need is another implementation of IDataService. A proxy, if you will, that takes care of calling the server and asking for data. I also don't want the proxy to care how the server is getting data (whether it be from Oracle or DB4O or whatever), so the server should expose itself in a way that also conforms to the dataservice, so I can just chain a call from the application to the proxy to the server to the already written OracleNHibernateDataService.

I could show you all the code that goes along with that, but that would be more of a novel than a blog. Basically, I used .NET remoting and exposed an implementation of IDataService as a remote object, which just passed any method calls through to the OracleNHibernateDataService (using lock() so that multiple calls to the server from different clients don't collide). Long story short, there's a new problem. Activation is no longer working. Why?

Remote activation

Obviously, the activate method on OracleNHibernateDataService just opens the session and calls any properties that have been passed to it to force them to initialize. That's great when you're working with the instance of that object on your local machine, but if you pass an object over the network to a server, which activates those properties, the instance running on the client machine is unchanged (boy, I should have seen that one coming. It's almost embarrassing). LESSON 3: YOU CANNOT MANIPULATE THE STATE OF AN OBJECT ON THE SERVER AND EXPECT ANY CLIENT OBJECTS THAT ARE REPRESENTING THE SAME DATA TO AUTOMAGICALLY RECEIVE THOSE CHANGES.

So we're back into the IDataService interface, now changing it so that the method actually returns the object that was activated (meaning the client will have to make sure to replace any references to the object):

public interface IDataService
{
    IList<T> Query<T>();
    object Activate(object obj,
        params string[] propertyNames);
}

We also will have to modify the Activate method in the implementation to return the object as the interface specifies:

public class OracleNHibernateDataService : IDataService
{
    public object Activate(object obj,
        params string[] propertyNames)
    {
        try
        {
            session.Lock(obj, LockMode.None); // re-associate the object with the session
            ActivateProperties(obj, propertyNames);
        }
        catch (Exception ex)
        {
            throw new Exception("Activation Failed!", ex);
        }
        return obj;
    }
}

Alright, another problem bites the dust. Surely now we have a functioning remote application.

Session State

WRONG! Of course we don't! See the line in the OracleNHibernateDataService where the "Lock" method is called on the session with the object as the parameter? Well that method associates a persistent object with a session when it was not retrieved from that session. Unfortunately, when that object has been serialized and deserialized, apparently the session doesn't recognize the object as an NHibernate friend. This means that when it tries to execute "Lock" on an object (that was sent over the network) that has any lazy-loaded members (which are ALL the objects in my app), even though the session is open, it will throw a LazyInitializationException. Yes, very frustrating. What was my fix? See the next iteration of OracleNHibernateDataService below:

public class OracleNHibernateDataService : IDataService
{
    public object Activate(object obj,
        params string[] propertyNames)
    {
        try
        {
            if (obj is IPersistable)
                obj = session.Get(obj.GetType(),
                    ((IPersistable)obj).Id);
            else
                throw new Exception(
                    "Hibernate persisted objects " +
                    "must implement IPersistable!");

            ActivateProperties(obj, propertyNames);
        }
        catch (Exception ex)
        {
            throw new Exception("Activation Failed!", ex);
        }
        return obj;
    }
}

Yes, I created a new interface called IPersistable that all my domain objects implement, and the one property it declares (Id) returns the unique sequence number that identifies that object in the database. So the object is actually reloaded from the session by its Id and then returned across the wire. LESSON 4: NHIBERNATE LAZY INITIALIZATION APPARENTLY DOESN'T WORK ON AN OBJECT THAT HAS BEEN SERIALIZED AND DESERIALIZED.

Are we done yet?

Server Security

No, not quite. One last problem kept me busy for HOURS. Remember that the OracleNHibernateDataService uses reflection to run activation? Well, it turns out that if a request coming from outside the local machine (like a remoting call from a client) kicks off the code that runs reflection, a security exception will be thrown and returned to the client. Reflection is a permission granted only to local code unless otherwise configured. In order to get my server to not bomb every time it was asked to activate something, I had to add the following lines to the server startup routine:

BinaryServerFormatterSinkProvider sfsp
    = new BinaryServerFormatterSinkProvider();
sfsp.TypeFilterLevel
    = TypeFilterLevel.Full;

IDictionary props = new Hashtable();
props["port"]
    = int.Parse(ConfigurationManager.AppSettings["ServerPort"]);

TcpChannel channel
    = new TcpChannel(props, null, sfsp);
ChannelServices.RegisterChannel(channel, false);


So now you know! Next time you want to work with an ORM tool over a network.... don't! Just kidding (mostly). With these lessons in hand, I will be much more effective in implementing this kind of solution in the future.

Sunday, January 20, 2008

Lisp-ing, Part 3: Predicates and Conditionals

This is Part 3 of my tutorial series on lisp.
<<PART 2 PART 4>>

Most programs cannot be written without branching at some point. It's all well and good to be able to execute various statements, but it doesn't really become useful until the program can decide to take a certain action based on some evaluation of a condition (e.g. if it's below 40 degrees, I'll wear a coat outside).

When writing lisp, you base all programmatic logic on predicates (technically this is true for any language, but because of the prefix syntax all comparisons FEEL like predicate functions rather than X == Y). In mathematics, a predicate is either a relation or the boolean-valued function that amounts to the characteristic function or the indicator function of such a relation. This is essentially true in lisp as well: a predicate is a function that returns a true or false value regarding a certain characteristic of its parameter(s). False is represented by NIL, and true is anything that is not NIL (most commonly represented by T).

There are two basic types of predicates in Lisp: Data-type predicates and equality predicates. Data-type predicates take one argument and determine whether or not that argument qualifies as the specified datatype. Here are some examples:

> (numberp 5)
T
> (numberp 'FIVE)
NIL
> (stringp "hello, world")
T
> (stringp 100)
NIL

"numberp" is a predicate function that returns T if the argument is a number, and NIL if it is not. Likewise, "stringp" determines whether or not the argument is a string. Most data-type predicates are written with the convention seen above; namely, the data-type name followed by a "p" for predicate. The two common exceptions are "null" which determines whether or not an expression evaluates to NIL, and "atom" which determines whether or not the argument is an atom (a single value, indivisible into smaller parts...basically, not a list). Here are some common data-type predicates built into the lisp language:


The other commonly used predicate type is the equality predicate, which takes two arguments and returns T if they can be considered equal (otherwise, NIL). Listed from least to most general, the equality predicates are eq, eql, equal, and equalp.

"eq" returns true only if the memory addresses of the arguments are the same, whereas equalp (being the most general) will return true if two lists have the same dimensions and contain items that are in the same order and print the same.
Now, those functions don't become useful until we can choose to take action based on their result.

Here's a few examples of using equality predicates:

> (eq 5 5)
T
> (eq 5 5.0)
NIL
> (eql 'A 'A)
T
> (eql 'A 'B)
NIL
> (equal '(A B) '(A B))
T
> (equal '(A B) '(A C))
NIL
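For readers coming from other languages, the eq/equal distinction maps loosely onto identity versus structural equality elsewhere. As a rough Python parallel (only an analogy: `is` roughly plays the role of eq, `==` roughly plays the role of equal):

```python
# Identity (like Lisp's eq): are these the very same object in memory?
a = [1, 2]
b = [1, 2]
print(a is b)   # False -- two distinct list objects
print(a == b)   # True  -- structurally equal, like Lisp's equal

c = a
print(c is a)   # True -- c and a name the same object
```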

Now, these functions are only useful when we can make decisions about what code to execute based on their result. These structures are called conditionals, and the most common one is "if". The "if" form takes three arguments: a test (which is usually a predicate), code to execute if the test is true, and code to execute otherwise. Here's a simple example of what this looks like:

> (if (numberp 10)
      (+ 5 2)
      (- 4 3))
7

This is certainly a trivial example, but it shows the correct formatting of an if statement. The example could be read as "If 10 is a number, execute a sum function with the arguments 5 and 2; otherwise execute a subtraction function with the arguments 4 and 3". Now here is a more complex example:

(defun our-member (obj lst)
  (if (null lst)
      nil
      (if (eql (car lst) obj)
          lst
          (our-member obj (cdr lst)))))

Here is a function definition that uses a nested if statement. This pattern could be carried out indefinitely, but past one nested if statement it probably makes more sense to use a "cond" macro instead.

Extra credit if you can figure out what the above recursive function does.

Saturday, January 19, 2008

Lisp-ing, Part 2: basic list manipulation

This is Part 2 of my tutorial series on lisp.
<<PART 1 PART 3>>

Within the lisp language, everything is represented as a list (hence the name, LISt Processing language). Given this as a starting point, it would be natural to assume that lisp has a host of built-in functions for the manipulation of lists. This is correct, in a way, but it is also true that all the powerful functionality that is built in for lists (reverse, append, remove, first, last, etc.) is reducible to three basic functions: cons, car, and cdr. The reason lisp was developed this way is that it has a slightly different paradigm regarding what a list is than what we would usually think of.

Imagine a long length of string drawn taut between two poles driven into the ground (like a clothesline). Now picture 10 plastic drinking cups hanging from the line at periodic intervals. This is usually what people think of when they hear the word "list": a series of identical "slots", into each of which one item could be placed. Now replace the picture in your head with 10 treestumps that you can put things on, each one with a branch sticking out of one end, and each branch with a piece of string leading from it to another stump (except for the last one in the sequence, which has a string leading to nothing). This is more of a lisp list. Each item in the list has two things: a value (the item ON the stump) and a link to the rest of the list (the string leading from the branch to the next stump).

Therefore, you're "list" of 10 items on treestumps is actually a "pair" of things: an item on the first tree stump, and a pointer leading to a list of 9 other tree-stumps with things on them. So what is that 9-tree-stump list? It is ALSO a pair of items: the item on the first tree stump, and a pointer to a list of 8 other tree-stumps with things on them. This subdivision goes all the way down to the last tree stump which is STILL a pair of items: the item on the tree stump, and a pointer to nothing (NIL).

Now, why is this distinction important? Because the three basic list-manipulation functions in lisp are based around this paradigm. Look at the following function call:

> (cons 'B '(O R D E R))
(B O R D E R)

It takes two arguments (an element and a list) and puts them together as a list. Basically it's the same as putting something on another tree-stump ('B) and tying that tree-stump's branch to the first stump in the original list. How about these two:

> (car '(L I S P))
L
> (cdr '(L I S P))
(I S P)

The CAR function takes a list as a parameter and returns the first part of the pair (the thing on the first tree stump). The CDR function takes a list as well, and it returns the SECOND part of the pair (the list of other tree stumps that the first one points to). If you think about it, you'll see why this is important: you can do pretty much anything to a list with those three functions. They are the building blocks of any functionality you need regarding a list. Here is a good example; it's one of the practice exercises from Paul Graham's "ANSI Common Lisp":

(defun our-list-copy (lst)
  (if (atom lst)
      lst
      (cons (car lst) (our-list-copy (cdr lst)))))

We've just written a new function called "our-list-copy" that takes a list as a parameter and returns a copy of it by recursively traveling through the list and "CONS"-ing each first element (CAR) with a copy of the rest of the list (CDR).
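The same recursion works in any language once you adopt the pair representation. Here's a rough Python sketch of the our-list-copy idea (tuples standing in for cons pairs; names are mine, not part of any library):

```python
def list_copy(lst):
    # An atom (here: anything that's not a tuple pair) is returned as-is;
    # otherwise cons a fresh pair of the head onto a copy of the rest.
    if not isinstance(lst, tuple):
        return lst
    head, rest = lst
    return (head, list_copy(rest))

original = ("L", ("I", ("S", ("P", None))))
copy = list_copy(original)
print(copy == original)   # True  -- same structure
print(copy is original)   # False -- a freshly built chain of pairs
```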

This kind of development is core to what lispers describe as bottom-up programming. You start out writing simple, minimalist functions. As your needs become specific, you combine these low-level functions into more powerful functions, which are in turn used to write domain-specific functions, which in turn become a kind of language for the construction of your top-level program. Basically, you're adding to the language as you go.

Friday, January 18, 2008

Lisp-ing, part 1: simple math

This is Part 1 of my tutorial series on lisp.
PART 2>>

Yesterday I wrote (mostly in jest) that I was so frustrated with a problem at work that I "wished my whole company was built on lisp". Long story short, I caught a lot of flak from my friends, who noted (correctly) that I have never built anything of substantial size in lisp, so I have no grounds for bringing it up.

Now, I've never been a fan of that kind of logic in the first place. Just because I personally haven't done it before doesn't mean that it's a bad option. Laying that point aside, I've decided to put my code where my mouth is and get my brain up to speed on lisp. Then I'm going to build something cool, and THEN I'm going to not feel guilty about suggesting it for projects in the future.

So, with the help of Paul Graham's "ANSI Common Lisp", I've started down the road away from imperative programming once again.

Syntax Synergy

The thing that I think scares most every-day hackers away from lisp is the (comparatively) very odd syntax it's written in. One of the simple functions I wrote while working through the first couple chapters looks like this:

(defun list-copy (lst)
  (if (atom lst)
      lst
      (cons (car lst) (list-copy (cdr lst)))))

To someone who's used to the Simula-syntax languages (as most programmers are), this can be a very challenging paradigm shift. All those parentheses hovering around, seemingly providing random groupings of keywords. It's like this for a pragmatic reason, though, and in the end it's actually simpler than the languages you might be used to. You don't have to know where to use braces, brackets, parentheses, or dots. Everything is grouped by the same syntax, and everything is a list. It takes some getting used to, but if you work with it for a couple of days your brain adapts and starts to actually compose your code ideas this way.

Simple Math

Step one when picking up a language is usually to work with the simple things that are easily defined. Namely, mathematics. It's an easy way to break into an unfamiliar workspace, since there's usually a lot of commonality across languages, and it can be a more gentle introduction than jumping right into a catalogue of basic functions. This is true when moving to Lisp, but perhaps less so than other languages, because most languages stick to the elementary-school standard "5 + 4" for doing simple arithmetic operations. Lisp, true to form (LISt Processing language), treats arithmetic operators just like any other function call. Standard syntax is something like this (the ">" is actually the prompt in the interpreter and is not part of the language):

>(function arg1 arg2)

so addition is done like this:

>(+ 5 4)
9

this can feel strange since we're used to a different order even in verbalizing the expression (Five Plus Four), but think about the benefit of prefix notation when you start to add extra arguments. Most languages chain together addition like this:

5 + 4 + 3 + 2 + 1

Notice any redundancy? We've used the "+" operator 4 times in one "sum" operation. Using lisp, this same expression becomes:

>(+ 5 4 3 2 1)
15

more concise, and actually more natural when you think about the way you do your sums on paper when you have no calculator:

  5
  4
  3
  2
+ 1
----
 15

Why express over and over that you are STILL doing addition? Why not just use the operator once? Subtraction is done the same way, but is a little more awkward, since it's not commutative the way addition is, and since we usually think of a dash at the front of an expression as signifying a negation of the result.

>(- 5 4)
1

and multiplication and division are in the same vein:

>(* 10 5 2)
100
>(/ 30 6)
5

Pretty simple, but enough to get your feet wet and to start practicing with a new syntax. Next time we'll go just a little deeper and look at list manipulation.

Thursday, January 17, 2008

Painful Static

There are some debates that have been going on in the software realm for some time, and one of the most famous is about whether it is more valuable to use statically or dynamically typed languages. I would not be so bold as to declare that I have a definitive answer to that question, but I had an experience today that has certainly caused me to lean in the direction of the dynamic side of the debate.

Working in C#.NET at my job, I had the task of writing a service layer between an existing application and a third-party service. Basically, I had an interface that my code would be required to conform to, and it looked like this (method names altered to protect the innocent):

public interface IDataInterface
{
    ICollection<T> Query<T>();
}

and I had a method I had to end up calling that looked like this:

public class ThirdPartyClass
{
    public static IList<T> Fetch<T>() where T : BusinessClass<T> ...
}

I knew for a fact that all the necessary code was in place within the application to call the interface methods at the right time, and all the code behind the fetch method would work correctly when called. All I had to do was write an implementation to plug the two together. No problem, here it goes:

public class DataInterfaceImpl : IDataInterface
{
    public ICollection<T> Query<T>()
    {
        return ThirdPartyClass.Fetch<T>();
    }
}

So now I'm thinking "OK, done!" Except it doesn't compile. Right, of course it doesn't compile, because the Fetch method specifically constrains T to be a subclass of something else! Well, now we have a real problem. It doesn't matter that the class I'm querying for ACTUALLY DOES descend from BusinessClass, because the compiler can't know that when it's putting the code together. All it knows is that the interface calls for an unconstrained generic parameter, and the method I need to call requires a constrained parameter. Believe it or not, this drove me crazy for the next couple of hours.

That DataInterface is used everywhere; I NEED to conform to it. I could make the interface ITSELF generic, you know:

public interface IDataInterface<T>
{
    ICollection<T> Query();
}

Then I could have my implementation do this, which actually compiles:

public class DataInterfaceImpl<T> : IDataInterface<T>
    where T : BusinessClass<T>
{
    public ICollection<T> Query()
    {
        return ThirdPartyClass.Fetch<T>();
    }
}

Ok, so this could work, because all the other implementations already in existence could just specify that they implemented IDataInterface, except with "object" as their generic parameter.

It doesn't fly, though, because my team is working with an injection framework that stores services according to their interface name. I need to be able to ask for IDataInterface anywhere and get back whatever the current system implementation is, and my solution won't fit because these two calls are not interchangeable:

this.dataService = Services.Get<IDataInterface<object>>();
this.dataService = Services.Get<IDataInterface<BusinessClass<T>>>();

Here I am, banging my head against the desk because I have no way to tell the language "I know what I'm doing, just trust me and when runtime comes I PROMISE I will hand you an object that matches what you need!" I would have saved several hours of misery if I had been using Python or Ruby because I just wouldn't have had this problem in the first place! Very frustrating.
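To see why a dynamic language sidesteps this, here's a rough Python sketch of the same shape of code (all class and method names are made up to mirror the C# above, not a real API). With no compile-time constraint to satisfy, the call just goes through, and a type mismatch would surface at runtime instead:

```python
# Hypothetical stand-ins for BusinessClass / ThirdPartyClass.Fetch.
class BusinessClass:
    @classmethod
    def fetch(cls):
        # Pretend this hits the third-party data layer.
        return [cls(), cls()]

class Employee(BusinessClass):
    pass

def query(cls):
    # No generic constraint to declare: we just call fetch and trust
    # that the class handed to us supports it. If it doesn't, we find
    # out at runtime instead of fighting the compiler.
    return cls.fetch()

people = query(Employee)
print(len(people))                                   # 2
print(all(isinstance(p, Employee) for p in people))  # True
```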

Don't get me wrong here. I think having strong typing is useful, and IDEs are more helpful when they have type information to look at. I'm just saying that there is a cost, and examples like this show why some days I wish my whole company was built on Lisp.

Wednesday, January 16, 2008

Relational vs Object Oriented Databases

People often think of an employee's productivity as a function of their job satisfaction (those who like their job will be more productive), but based on my day to day experience I'd contend that the exact opposite is true: my job satisfaction is a function of my productivity.

Today I spent the majority of the day wrestling with an ORM tool, and I didn't get very far, and as a result I don't like my job very much right now.

This is a particularly painful mar on my workplace happiness because of the very positive experiences I've had recently with DB4O (Database for Objects), so I'm going to compare and contrast a little bit to vent my pain.

Writing the data layer

Usually when you're writing an application on an enterprise level, you're going to have to write some sort of intermediary data access layer in order to transform the data in your storage medium into the business objects that get passed around the rest of the application. When building an application (in an OO paradigm) with a relational database, you have several steps to go through.

1. Get a database instance up and running.
2. Create the tables you will use to hold your data.
3. Define the relationships between those tables.
4. Code the objects that will represent your data in the application.
5. Write all the code necessary to take an instance of each entity object and shred its state into the correct tables in the database.

After all that, you can get down to the business of putting together an application to deal with that data. Despite the fact that it only took 5 steps to delineate, this can be a pretty painful process, and it is exponentially more so the bigger your object model is. There can be hundreds of lines of code just dedicated to the task of sending data to the database.
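To make step 5 concrete, here's a deliberately tiny sketch of that hand-written shredding code, in Python with sqlite3 (the Person class and table are hypothetical). Multiply this by every class and every field in a real object model to get a feel for the cost:

```python
import sqlite3

class Person:
    def __init__(self, name, department):
        self.name = name
        self.department = department

db = sqlite3.connect(":memory:")
# Steps 2-3: create a table that mirrors the object's fields.
db.execute("CREATE TABLE person (name TEXT, department TEXT)")

# Step 5: hand-written mapping code -- one parameter per field here,
# and it grows (and has to be kept in sync) with every schema change.
def save_person(db, p):
    db.execute("INSERT INTO person VALUES (?, ?)", (p.name, p.department))

def load_people(db):
    return [Person(n, d)
            for n, d in db.execute("SELECT name, department FROM person")]

save_person(db, Person("Ethan", "IT"))
people = load_people(db)
print(people[0].name, people[0].department)  # Ethan IT
```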

Now, when you want to start a new application using DB4O, here is your data layer (my examples for this article will be written in C#.Net):

IObjectContainer db4o = Db4oFactory.OpenFile("[filename].yap");
Person person = new Person("Ethan Vizitei");
db4o.Set(person);

That's it. That object is now stored in a working database. If the file specified doesn't already exist, it is created, and each object you send to it is saved as-is. I don't have to write anything to map the fields to another medium, and I don't need to build complex DAOs and write a lot of iffy test cases around them. It just works.
This should appeal even to people who avoid OO databases because they hate having to master new technology. How much easier could it get?
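Setting db4o's specific API aside, the store-an-object-as-is experience can be loosely simulated with Python's standard-library shelve module (this is only an analogy for flavor; shelve is nowhere near a full object database):

```python
import os
import shelve
import tempfile

class Person:
    def __init__(self, name):
        self.name = name

path = os.path.join(tempfile.mkdtemp(), "people")

# Open (and create, if missing) the object store: no schema, no mapping code.
with shelve.open(path) as db:
    db["evizitei"] = Person("Ethan Vizitei")

# Reopen and read the object back, fields intact.
with shelve.open(path) as db:
    print(db["evizitei"].name)  # Ethan Vizitei
```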

Querying for data

Ok, now that we have something stored in the database, I want to extract data that's previously been persisted.

Working with an RDBMS, I'm going to have to generate some SQL one way or another. What I WANT is a set of objects that meet a certain criteria, but what I have to DO is open the code in my data-layer, look up what fields in my objects map to what data in the database, and write some SQL to do the work.

How would an object database handle this? Let's say I want to find all the people who work in the IT department at my company. I have an object "Person" with a property called "Department". I could do this one of a couple ways. I could query the DB4O file with an example object:

Person prototype = new Person();
prototype.Department = "IT";
IList people = db4o.Get(prototype);

or I could just write a predicate:

IList<Person> people = db4o.Query<Person>(
    delegate(Person person)
    {
        return "IT".Equals(person.Department);
    });

Either way I now have the list of people I wanted without any extraneous work. It's all done with objects, which is the natural way a programmer is thinking when he's working in an object-oriented paradigm. Your data IS an object, not a flat representation of an entity.
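Conceptually, the native query above is just a predicate applied to every stored object, keeping the ones that answer true. In Python terms (with a made-up Person class standing in for the persisted objects):

```python
class Person:
    def __init__(self, name, department):
        self.name = name
        self.department = department

people = [Person("Ann", "IT"), Person("Bob", "Sales"), Person("Cam", "IT")]

# The db4o-style native query is, at heart, this: filter the stored
# objects with an ordinary predicate written in the host language.
it_staff = [p for p in people if p.department == "IT"]
print([p.name for p in it_staff])  # ['Ann', 'Cam']
```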

If you're a Java or .NET developer who sees the development savings inherent in NOT spending a third of your time writing data-access code, consider giving db4o a shot on your next prototype. It's at least worth a once-over if you haven't tried it before.

Tuesday, January 15, 2008

Ethan Vizitei, 101

Hey, Who do you think you are?

Seriously, that's what it feels like. I must prove myself to you, the reader, or you will leave and not come back. Basically, if I don't impress you in the first few sentences, I will never see you again. The simple act of publishing a blog for public consumption places this weighty feeling of responsibility on your shoulders.

Why do you deserve to comment on the goings on of the technology world?
What right have you to voice your opinion?

Honestly, I don't know. I don't consider myself a guru or an expert. This blog is just me telling everyone that I think I have useful things to say about software and the world we model with it, and here is where you can find those allegedly useful tidbits. Of course, you may disagree with me (more about whether my ideas are useful than whether they are present at this web-location or not), and you know what? I hope you do. If the world is going to benefit from my opinion then I deserve the benefit of other people's opinions about my opinions as well (you could continue this thought recursively, but it's probably not worth the effort).

Seriously, who are you?

I'm Ethan Vizitei. I live in Columbia, MO with my wife, my dog and my laptop, and very little else of importance (you'll find that's true about Missouri). Don't let my midwestern location fool you, though. I'm a software developer during the day (Triple-I consulting) a Computer Science student at night (Columbia College) and a startup founder on the weekends (Zetassociates, LLC.).

Are you one of those technology bigots?

No, although I know plenty of them. I personally enjoy writing on a lot of different platforms. I've written enterprise apps (desktop and web) in Java and C#.NET, small webapps in Ruby, and all kinds of personal projects in C/C++, Lisp, Python, and others. I run both Linux and Windows, and my wife runs a Mac. I think flamewars are a waste of time, and I'm not interested in getting dragged into a debate about why X-language is infinitely superior to Y-language because of Z-list-of-ridiculously-exaggerated-features. You work in whatever makes you most productive, and I will do the same.

Ok, so why do you have a blog?

Based on the credentials listed in the above paragraph, I hereby declare myself opinionated and outspoken on the subject of software. I want people to know my thoughts on the writing of code, the craft of code, the careers built on code, and the occasional personal detail that I can dubiously link to code in one way or another.

Also, I have a terrible memory, even for my own thoughts, so this should help me maintain a record in case I say something smart that I'd like to go back and refresh myself on.

Better question, why should I read your blog?

Ah, NOW you're to the meat of the problem. I think my thoughts are interesting, that's why I'm going to write them down. Lucky for you, if you don't find them as interesting as I do, you are under no obligation to come back. Your browser, your rules (no hijacking, I promise). However, I hope you stay, and furthermore that you leave comments, because I plan on doing this for a long time and it's only going to get better if people tell me when they're getting bored.

Wow, I'm convinced

Hey, that was easy! Great to have you as a new friend and colleague! I guess you may have noticed by now that I'm also voicing your half of the conversation, so you will say anything I decide you do as long as you continue to read anything you see in bold as your inner monologue.

Why "CodeClimber"

I like writing code, and I like rock-climbing, so it's the natural synthesis of two of my favorite things! (look, lots of the cool names on blogger have already been taken, I'm going to have to ask you to work with me here...)

How can I contact you if I want to say something unrelated to a blog post?

You can reach me by email; I'll be happy to talk about all kinds of things on all kinds of subjects (yes, even things that are not software related), so please save the comments for things that relate to the blog.

Is there anything I can buy from you?

Well, now that you mention it...

Actually, no. I'm not selling anything. In the future, if Zetassociates comes out with some software that people find useful, I hope you will buy it, but I will try and keep plugs to a minimum.