The Endless Obsession

better code

.NET: Getting the Project Root Directory at Runtime

When developing in .NET, whether it’s an ASP.NET website or a console application, you usually don’t have to access the file system directly. One reason you might is to read a configuration file, but .NET’s configuration management handles this for you via ‘Web.config’ or ‘App.config’ and the ConfigurationManager. Another scenario is loading a resource, like an image or document, from disk. But, again, the framework provides a nice solution via embedded resources.
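
For example, reading a setting with the ConfigurationManager looks something like this (the “UploadFolder” key is just a made-up example, and it assumes a reference to the System.Configuration assembly):

// "UploadFolder" is a hypothetical appSettings key in App.config/Web.config.
var uploadFolder = System.Configuration.ConfigurationManager.AppSettings["UploadFolder"];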

If you’re considering accessing the file system, you would do well to ask whether there’s a better way to accomplish what you’re trying to do. That being said, you may occasionally have a legitimate reason to access the file system directly.

Here are a few simple examples of how to do this:

ASP.NET

var root = HttpContext.Current.Server.MapPath("~/");

WinForms

// Application.ExecutablePath includes the .exe file name; StartupPath is just the directory.
var root = Application.StartupPath;

WPF, Console Application, etc.

var assembly = System.Reflection.Assembly.GetEntryAssembly();
// Use LocalPath rather than AbsolutePath so spaces in the path don't come back URL-escaped (%20).
var assemblyPath = new Uri(assembly.CodeBase).LocalPath;
var root = System.IO.Path.GetDirectoryName(assemblyPath);
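
Whichever flavor you use, you’d typically combine the root with a relative path rather than hard-coding a full one (the file name here is just for illustration):

var templatePath = System.IO.Path.Combine(root, "Templates", "invoice.html");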

So, why would you even want to do this?

Continue reading

Removing a Password from a Git Repository

It’s generally considered a bad idea to commit passwords, API keys, etc. to your source code repository. There are various ways you can try to avoid this (and you should), but sooner or later it’s going to happen. Someone will add some “sensitive” data to a repository.

For some projects it doesn’t matter quite so much. If they’re not hosted publicly on a site like GitHub and never make it out of your company’s internal network, then it’s not that big of a deal. For others, it is important, and it could be that your only option is to reset the passwords, revoke the API keys, or something to that effect. What a pain!

There is another option to consider if you’re using git. Since git allows you to rewrite history, you can rewrite the repo to make it look like the password leak never even happened. This should only be done if you know the data hasn’t been leaked and the repo isn’t propagated all over your organization (rewriting history will cause grief for anyone who already has the repo).

Let’s say your history looks something like this:

c6 = add test cases
c5 = tweak theme colors
c4 = add password to config * <- commit of the data leak
c3 = fix spelling error
c2 = prototype * <- last commit to config before the data leak
c1 = add README

* = involves the offending file

Start by getting the last version of the config file from before the password was added:

git checkout c2 /path/to/config

…or, remove it manually.

Then, commit that change:

git commit -m "remove password from config"

Now your history looks like this:

c7 = remove password from config
c6 = add test cases
c5 = tweak theme colors
c4 = add password to config * <- commit of the data leak
c3 = fix spelling error
c2 = prototype * <- last commit to config before the data leak
c1 = add README

* = involves the offending file

Now, start an interactive rebase from just before the commit where the password was introduced (c4 in this example):

git rebase c4~1 -i

This opens up vim, or whatever your configured editor is.

pick c4 add password to config
pick c5 tweak theme colors
pick c6 add test cases
pick c7 remove password from config

A bunch of other stuff...

Move the “fix” commit (c7 in our case) to the line below the one where the password was added (c4), and change the prefix from ‘pick’ to ‘fixup’.

pick c4 add password to config
fixup c7 remove password from config
pick c5 tweak theme colors
pick c6 add test cases

Per GitHub’s rebase documentation, ‘fixup’ tells git to use this commit to “fix” the prior commit, and then discard it.

The result is that the history looks like it did before, but the password is no longer in the config file on disk, or in git’s history. Also, if the only change to that file in the offending commit was the password which you removed, then the file will no longer show up in that commit (as you would expect).

A Commentary on Comments

// TODO: Improve post intro before publishing to the interwebs.

Comments are probably one of the most boring topics you can imagine.

Plus, they’re so easy to write, right?

And yet, somehow I get the feeling we’re doing it wrong.

Have you ever tried in vain to understand why a bit of code was doing something that made no sense, and wished that the author had left you a clue as to their intentions at the time (in the comments, perhaps)?

Or, have you ever looked at a bit of code, and then the comments attached to it, and then back at the code, and then the comments… and wondered, “Am I not getting something about this code, or is this comment just plain wrong?” (Hint: the comment is probably wrong.)
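
To make that concrete, here’s a contrived illustration of the kind of mismatch I mean (the type and method are made up purely for this example):

// Returns the customer's full name.
// (Or at least it used to -- the comment never got updated when the code changed.)
public string GetDisplayName(Customer customer)
{
    return customer.LastName;
}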

Finally, think back to the last time you looked at a comment and thought to yourself, “hmm, that was actually useful”.

Continue reading

So Long Octopress, Hello Wintersmith

Octopress has been a popular blogging framework for hackers for quite a while, and so I made the switch a few years ago in order to escape the clutches of WordPress. Now I have a hackable blog that also costs me nothing to host.

Unfortunately, Octopress isn’t a good long-term fit. At the end of the day, the deal-breaker for me is the fact that it runs on Ruby.

Don’t get me wrong, Ruby itself is just fine. But if you’ve worked with Ruby on Windows, you know that it’s full of pain all the way down. If you’re thinking about it, do yourself a favor and just don’t do it.

Sure, I could have run Linux on a separate partition, or in a VM, or in the cloud, etc. I have no problem doing that, except that it requires managing yet another machine/OS. I’m already investing enough time automating Windows-based workstations, so why spend my time on it if I don’t have to?

So I set out to find an alternative that would work reliably on Windows, and run on node.js. I targeted node because it generally just works on Windows, and node and JavaScript in general are skills that I’m more interested in developing. What I found was Wintersmith.

Continue reading

Octopress post excerpts and 'couldn't parse YAML'

This evening I noticed a syntax error in my blog’s ATOM feed. The error seemed to stem from the text in one of my more recent posts, what exactly I didn’t bother to determine. However, I also noticed that the post summaries were rather large and didn’t appear to do a good job of summarizing post content in any case. So, it seemed the best thing to do was to figure out how I could get Octopress to use a better summary and kill 2 birds with one stone.

I discovered that the YAML that lives in each post’s header (more on that later) could include an “excerpt” property where I could write my own summary of the post.

For example, here’s the header of this post:

---
layout: post
title: "Octopress post excerpts and 'couldn't parse YAML'"
date: 2013-10-15 21:14
excerpt: This evening I noticed a syntax error in my blog's ATOM feed. The error seemed to stem from the text in one of my more recent posts, what exactly I didn't bother to determine. However, I also noticed that the post summaries were rather large and didn't appear to do a good job of summarizing post content in any case. So, it seemed the best thing to do was to figure out how I could get Octopress to use a better summary and kill 2 birds with one stone.
comments: true
categories: 
---

This worked so well that I promptly went through all of my posts and threw up some quick excerpts without much thought. Unfortunately I soon ran into an error: ‘parse’: couldn’t parse YAML.

I hadn’t thought about it until now, but the configuration at the head of each post is YAML. I don’t know much about YAML, but apparently something was wrong with one of my excerpts.

Fortunately I was able to track it down to the “:” character: a colon followed by a space inside an unquoted value starts a new key/value pair in YAML, which breaks parsing. Wrapping the excerpt text in quotes fixes it. YAML Lint was an invaluable resource.

File.OpenWrite Gotcha

Recently I ran into an odd problem when writing a text file to disk using .NET’s File.OpenWrite.

using (var fileWriter = new StreamWriter(File.OpenWrite(outputFilePath)))
{
    fileWriter.WriteLine("abc");
}

You might expect that after executing this code the text in the file would be “abc”. Not quite. In my case I was sometimes seeing results like this…

abc
some other text

…where “some other text” is the last bit of text in the file before writing.

It turns out that the documentation for File.OpenWrite contains the answer.

If you overwrite a longer string (such as “This is a test of the OpenWrite method”) with a shorter string (such as “Second run”), the file will contain a mix of the strings (“Second runtest of the OpenWrite method”).

OpenWrite behaves much like the dreaded overtype mode (the one the Insert key toggles) in word processors and text editors: it overwrites existing bytes in place without truncating whatever comes after.

The solution that I chose is pretty simple. Just clear the file’s contents beforehand.
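
Something along these lines (a sketch of the idea, not necessarily the exact code I used):

// Clear out any existing contents first, then write as before.
File.WriteAllText(outputFilePath, string.Empty);

using (var fileWriter = new StreamWriter(File.OpenWrite(outputFilePath)))
{
    fileWriter.WriteLine("abc");
}

Alternatively, File.Create (which opens with FileMode.Create) truncates an existing file for you, so wrapping new StreamWriter(File.Create(outputFilePath)) in the using statement solves the problem in a single step.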

Chrome select excessive padding

Recently I noticed that dropdowns (<select>) in Chrome suddenly had an excessive amount of padding on my Windows 7 PC. This is what it looked like.

Screenshot of excessive padding

I did some digging on the web and found a Chromium bug report.

One of the comments near the end of the page gives the solution to the problem.

Actually that is the problem. Windows has a service called “Tablet PC Input Service”. When this is running Windows 7 thinks it’s a tablet instead of a desktop. Once I turned this off and restarted chrome the drop down spacing is correct. But as soon as I turn the service back on the extra spacing comes back.

Sure enough, there was a service running on my laptop called “Tablet PC Input Service”.

Screenshot of the Tablet PC Input Service

…and when I stopped the service and restarted Chrome, the padding went back to normal.

Screenshot of normal padding

My laptop has a touchscreen, and turning off this service doesn’t seem to affect touchscreen functionality. Don’t forget to change the service startup type to “Manual” or “Disabled” so that it doesn’t start back up the next time you log on.

This Google Groups post might also be relevant.

Should you write awesome code?

I ran across this provocative post on the tubes: How I stopped writing awesome code. The following is my reaction and thoughts (hastily thrown together). Go to the link and read it first or what follows probably won’t make much sense.

I certainly agree that some of the practices he mentions can have little benefit in the short term. Many people pointed out in the comments that the short-term gains of “trimming the fat” can come back to haunt you on long-term projects, and I’ll second that.

Someone in the comments described how they have frequently inherited projects where it looked like the developer(s) had this mindset, and those projects usually turned out to be a mess to maintain. I think the mental burden he describes in understanding concepts and tools like IoC, ORMs, etc. is only one side of the coin. The other side is that skipping best practices and the powerful tools that are available can be just as frustrating to a developer who inherits the project (or even your future self), especially if the app has to be significantly enhanced or changed.

That said, I have also had the experience of over-engineering a project too early and ending up “stuck” with some of those poor decisions later on. I think the key is not to over-engineer too early, but rather to follow only tried-and-true best practices that require little added effort and could yield future gains, to improve and refactor as the project grows, and to never be satisfied with where things are today. For what it’s worth, in the case where I made regrettable engineering decisions early on, I was also deviating from common best practices, i.e. trying to be clever.

As for F12, I don’t think you can ever really get around the limitations of the IDE with respect to interfaces and abstract/virtual members. I have been in the habit lately of using Shift+F12 (symbol search) rather than “Go to Definition” unless I know for sure that there is a single implementation of the member that I’m trying to get at. Resharper is also a good choice to improve the IDE experience. As irritating as F12 can be, I wouldn’t use that as reason to avoid using useful language features.

Finally, to tie it together with an anecdote: I can think of two projects off the top of my head that seem to fall into the two extremes. One was hastily thrown together by a freelance developer to meet a client need. The source wasn’t hosted on a popular OSS hosting service; it was just offered up for download on the blog. There were no unit tests. Most of the code was in a single class file, with a number of supporting classes that were essentially just data containers. On the one hand, I ended up having to significantly enhance that project and found it to be very frustrating and painful. On the other hand, if he hadn’t thrown it together, it may be that nothing like it would have existed at all and I would have been forced to start from square one (and make my own mistakes as a result).

The other project was highly engineered and conformed to a highly detailed open specification, complete with interfaces, IoC, and unit tests galore. On the one hand, when I discovered a bug I was able to easily verify it with a unit test, fix it, and see the test go green. On the other hand, the project is not very active now (it has been somewhat superseded by other technology), so you could argue that the effort was wasted.

It’s all about balance…

Also, someone pointed out that it’s possible to write UI acceptance tests that actually interact with the web browser (using Selenium) and mimic user behavior. I happen to use WatiN in my day-to-day work, but what’s important is that you can verify the behavior your clients are going to be looking for in an automated fashion. Do it!
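
A bare-bones example of that kind of test using Selenium’s C# bindings looks roughly like this (the URL and element IDs are made up for illustration):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Drive a real browser the way a user would, then check what they would see.
using (IWebDriver driver = new FirefoxDriver())
{
    driver.Navigate().GoToUrl("http://localhost:8080/login");
    driver.FindElement(By.Id("username")).SendKeys("testuser");
    driver.FindElement(By.Id("password")).SendKeys("secret");
    driver.FindElement(By.Id("submit")).Click();

    // For example, verify that the welcome banner appears after logging in.
    var welcomeText = driver.FindElement(By.Id("welcome-banner")).Text;
}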

Thanks to Jonas for provoking critical thought! :)

Carolina Code Camp 2012 Presentation

Recently I co-presented with Bobby Dimmick at the annual Carolina Code Camp, put on by the Enterprise Developers Guild of Charlotte. The event was wildly successful, with sold-out attendance and a day fully booked with speakers.

Pictures from the event

Our presentation was entitled “Building rich, model-centric, event-driven web apps using EF, Razor & open source”. The approach we took was to build a demo application from the ground up, and document the process so that it was reproducible. The app is hosted at todoapp.exosuite.com and the walkthrough is at todoapp.exosuite.com/walkthrough.

Screenshot

You can build the app yourself by following the walkthrough. I promise, I did it many times! :)

Learning F# with TDD: Part 2

Last time I talked about setting up F# testing using NUnit, TestDriven.Net, and NaturalSpec. This time around I’ll elaborate a little bit on the testing aspects, and also talk about active patterns.

Continue reading