Tuesday, February 19, 2013
Wednesday, January 23, 2013
I'm writing a metagaming webapp for my current game obsession which relies heavily on the user's display width. Luckily, the monthly results of the Steam Hardware & Software Survey are posted online for public use. (In this post I'm using results from December 2012.) I figured this would be the perfect way to get a good idea of how wide the typical user's browser window was likely to be. As it turns out, it is an excellent way, but it was a lot simpler than I expected...
The first thing I did was go through the list of primary monitor resolutions, combining them by width, and plotting their width against the percentage of the total.
Quite surprisingly, users' display sizes are all over the place. This isn't the width at which they keep their browser window, but the maximum pixel width of their primary display. Still, it's fairly safe to assume that most users with smaller monitors run their browser at full screen.
To keep things somewhat sane, I decided to completely ignore the multi-monitor statistics. Most importantly because I think Steam's survey already counts multi-monitor users' primary monitor in the primary monitor statistics, but also because, even if they aren't grouped together, there is nothing to tell me what percentage of users have multiple monitors versus a single monitor. Grouping the calculated primary monitor width of those users in with single-monitor users would pollute the numbers: 50% of two plus 1% of one hundred does not equal 51%. Without knowing the actual counts, I can't really add them.
Because of my application of this data, the final number I need to come up with is the largest pixel size that will fit on a significant majority of expected users. Or, in simpler terms, what is the biggest width that still fits most monitors?
Initially, I was going to do all sorts of calculations based on the pixel width of the typical browser's window border, the scroll bar size, how many users would have the scroll bar shown, based on their display height, etc. But, looking at the chart, it's really a lot easier than that:
One in twenty users still have a 1024x768 monitor. Therefore, assume the maximum allowable width is about 1000px.
That's about it, really. All of this expectation that I was going to calculate the exact pixel width I could use to derive precise measurements for my webapp boils down to "assume you've got 1000 pixels".
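If you want to repeat this exercise against a later survey, the whole analysis fits in a few lines. The width shares below are illustrative placeholders, not the actual December 2012 figures, and max_fitting_width is a name I made up for this sketch:

```ruby
# Illustrative monitor-width shares (percent of users). These are
# placeholder numbers, NOT the real Steam survey results.
width_share = {
  1024 => 5.0, 1280 => 25.0, 1366 => 18.0, 1440 => 8.0,
  1600 => 7.0, 1680 => 10.0, 1920 => 25.0, 2560 => 2.0,
}

# Largest width that still fits a target share of monitors: a monitor
# "fits" a layout of width w when its own width is >= w.
def max_fitting_width(shares, target_percent)
  shares.keys.sort.reverse.find do |w|
    covered = shares.select { |width, _| width >= w }.values.sum
    covered >= target_percent
  end
end

puts max_fitting_width(width_share, 99.0)   # prints 1024
```

With a 99% target the answer is the 1024-wide group, and knocking off a couple dozen pixels for window borders and scroll bars lands you right at the 1000px rule of thumb.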
Oh well, at least it's a nice, round number.
Saturday, January 12, 2013
Because I'm working in shared-hosting environments, I typically deploy my Camping apps on servers running Passenger, so I run them as Rack applications with local.ru files to establish database connections and such.
Camping's migration definitions work well when you're using the Camping server, but they aren't automatically run when your app is loaded with Rack. This is fairly easy to solve:
- ActiveRecord::Migration.migrate(:up) goes in your app's Models module,
- def self.create; Models.create_schema; end goes in your app's module, and
- MyAppModuleName.create goes in your rackup files.
This will run any non-applied migration when the application starts. It's my temporary solution to a complex problem (database consistency is hard), but my workflow treats the repository's master branch as "always deployable", so it is of primary importance that git clone ...; rackup be able to fully and correctly deploy my web apps.
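The wiring looks roughly like this. It's a schematic sketch with stand-in module names; in a real Camping app, Models.create_schema comes from camping/ar and applies any ActiveRecord migrations that haven't been run yet, so the stub below only mimics its shape:

```ruby
# Schematic sketch of the wiring described above. MyApp is illustrative;
# in a real Camping app, Models.create_schema is provided by camping/ar.
module MyApp
  module Models
    def self.create_schema
      # Stand-in for Camping's real create_schema, which runs any
      # pending ActiveRecord migrations against the connected database.
      @schema_created = true
    end

    def self.schema_created?
      !!@schema_created
    end
  end

  # The create hook on the app module, as described above.
  def self.create
    Models.create_schema
  end
end

# The rackup (local.ru) file calls the hook at boot.
MyApp.create
```

In the real local.ru you'd establish the ActiveRecord connection first, then call MyApp.create, then hand the app to Rack with run MyApp.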
Monday, January 7, 2013
I've been reading up on unit testing with Test::Unit, but I haven't practiced enough to feel confident of my understanding.
One thing I didn't quite understand was the relatively "explosive" nature of assertions: especially in Ruby, with its strong metaprogramming capabilities, there are a huge number of assertions which each seem to exist for one specific, typical behavior. For example, assert_not_nil is used to test for the specific case that the given object's nil? method returns false, and assert_respond_to seems like it could just as easily be written as "assert obj.respond_to? meth" instead of the less-readable "assert_respond_to obj, meth".
Once I read the source (Luke), I realized that these helpers really are just tacking a bit of behavior onto a simple assert: assert_not_nil literally contains the phrase "assert !exp.nil?", and while some helpers like assert_equal provide a little more help than others, they all boil down to the same base. This simplified the entire concept of unit testing down to a single sentence:
Assert that things are the way they should be, otherwise fail.
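You can see the reduction with a couple of toy helpers. These are simplified stand-ins I wrote to illustrate the point, not Test::Unit's actual code:

```ruby
# Simplified stand-ins for Test::Unit's helpers, showing that every
# assertion boils down to a plain assert.
def assert(condition, message = "assertion failed")
  raise message unless condition
end

def assert_not_nil(exp)
  assert !exp.nil?, "expected #{exp.inspect} not to be nil"
end

def assert_respond_to(obj, meth)
  assert obj.respond_to?(meth),
         "expected #{obj.inspect} to respond to ##{meth}"
end

assert_not_nil "hello"        # passes silently
assert_respond_to [], :each   # passes silently
```

Each helper just builds a condition and a friendlier failure message, then hands both to assert.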
Now I can continue applying unit testing and progress into the domain of specification.
Friday, January 4, 2013
Last night I switched to an EVGA-branded GeForce GTX 650 Ti, and before I go into any criticism, I have to say that I am loving the massive performance boost. Every game so far has run at the highest graphical settings (minus anti-aliasing) with nothing but buttery smoothness.
The complaint I have is that I didn't realize how far behind AMD Nvidia is in user-facing software; it's shameful. Most of the heavy PC gamers I know use their PC for more than just keyboard-and-mouse gaming. I use multiple monitors for productivity and connect my PC to the TV to use Steam's fantastic new "Big Picture mode" nearly every day. With the Radeon, I went through the configuration process and quite quickly set it up so that when I plug the HDMI cable into the TV, the video card picks up the change, turns off the two side monitors, and duplicates the primary monitor to the TV. Unplugging the HDMI cable removes the duplication and turns the side monitors back on.
All of this happens automatically with a single interaction from me: Physically plug in the HDMI cable.
On the new GeForce card I also have to physically plug in HDMI to the TV, but then I have to follow at least half a dozen more steps: Open the control center, go to the correct menu, disable the side monitor (checkbox and apply), enable the TV output (checkbox and apply), go to another menu, and set up duplication (combo box and apply). Reversing the process after I'm done takes the same number of steps, and roughly ten minutes to get everything synchronized.
The GeForce process results in hundreds more places for things to go wrong. There are significantly more chances to miss the menu, forget how the setting is worded, or any number of other tiny frustrations. It may seem like something one shouldn't get worked up over, and you'd be right, but when the competition so clearly outstrips something you spend a good deal of money on, it's a pretty big frustration.
The solution is simple (from a user's perspective): Add the ability to create a profile of the current settings, so I can just select the "1+TV" profile or the "1+2+3" profile and restore the respective settings.
Wednesday, January 2, 2013
Like many people I work with, I tend to be ruthlessly pragmatic toward change. There is a single rule for changes we are considering: it must benefit someone in an effectual way. No CYA regulation, no avoiding the real problems in favor of ineffective "solutions", and absolutely no accepting excuses. If it needs to be fixed, it needs to be fixed, and neither time constraints nor difficulty should get in the way of providing a better experience for our customers.
We run into problems when we try to interface this "make it work" approach with regular people, who hate any change they think is unnecessary. They don't want to change things because they usually don't see the benefit, don't understand why we want to change what already works, or (very importantly) already have too many things to do and would really rather not bother with our changes. Most of this checklist can be found on just about any productivity blog, but this specific blend works well for me, so I thought I'd at least get it down somewhere.
One final note: If you can check every single item you've either grossly overestimated your preparedness or underestimated your idea. Now ignore that last sentence, be bold, and learn from your success.
Is my idea ready to submit?
- Can your co-worker explain your own idea back to you? If not, it's far too complicated for anyone else. Their five-year-old should be able to explain the concept, in detail, to a crowd of high-school students fifteen minutes before the bell.
- Do you know how you're going to actually implement the idea? "Yeah, we're going to..." is not the answer your boss wanted. Show a plan. They're not going to read it, but they want to know it exists.
- Have you actually used the new version to do the work? It's easy to say a new program will be faster, easier, or do everything you need, but you really don't know that until you've actually done the work. Software people call it "eating your own dog food", and it is of vital importance.
- Why (honestly) are you making the change? Great, you think you found a better way to do something. The first question you're going to get will end with "but why?" Be able to answer this confidently enough that the other party understands the effect, or at least agrees that you know what you're doing.
In my experience, the trick to pushing through "inconsequential" changes like font size or specific wording hinges on the explanation. "Because it looks better" is a terrible reason; instead try "it matches the style in the other documents, which looks more professional".
- Who will this change affect? Do you know, or are you guessing? Because it probably affects more people than you realize. Only HR uses the employee database... except when payroll uses it... and central management uses it... and public affairs uses it... and-- You get the idea.
Tuesday, April 17, 2012
Is this for me?
Do you care why Git is better than other options, want to know how Git works, need a detailed guide to Git, or just need to translate SVN commands to Git? This isn't for you.
Otherwise, if you only want to start using the damn thing, continue on. But first: I assume you already have Git installed. If you don't, start from git-scm.com and come back when git --version returns something.
Some quick definitions
- Repository (Repo): Every file and change git remembers, including the entire history of how it got to this point.
- Working directory: The contents of the directory on your machine. Git doesn't remember this, and laughs in your face when you accidentally overwrite it.
- Staging area (Stage): A place to store changes you're about to tell git to remember. This is also called the "index".
Making a repository from scratch
Put any existing code into a directory, then go in there and run git init to tell git this folder will store stuff. It won't remember anything yet.
Grab an existing repository
git clone ADDRESS will download the entire repo (including the history) and make a local, offline copy. After you duplicate it, you can grab new changes with git pull and send your changes back with git push. If you don't have permission to push changes, you might be able to send a pull request.
If you're grabbing a repo from github, the address will probably look like git@github.com:USER/REPO.git, and you probably won't need permission to duplicate it, since most repos on github are public.
What does git see, right now?
With the default configuration, git status will report what git can see: files it isn't watching, files you've changed since it last remembered them, and some other details about the current state of the repository.
If you want more detail on the difference between your files, the stage, and the repo, git diff will show you a line-by-line view.
Get things ready to remember
git add FILES will add the current state of the listed files to the staging area. You can use a period (git add .) to tell git to stage all the files in this directory, or just tell it the specific ones you plan on storing.
If you change a file after staging it, you have to git add it again to get the latest changes.
To take things off the staging area, use git reset somefile.
Remember the staging area's changes
git commit will tell git to store the changes on the stage into the repository. This makes an entry in the history (called a "commit") with a unique hexadecimal identifier so you can refer to it later.
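Put together, the whole edit, stage, commit cycle looks like this in a throwaway directory (the file name and author details are just examples):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q                              # start a fresh repository
echo "hello" > notes.txt                 # change something in the working directory
git add notes.txt                        # stage the change
git -c user.name=Example -c user.email=example@example.com \
    commit -q -m "Add notes"             # record the staged change in the history
git log --oneline                        # shows the new commit and its identifier
```

The -c flags just set an author identity for this one command, in case you haven't configured git yet.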
Get the latest updates to a repository
If you did a git clone on someone else's repository, you can git pull to grab all the updates and apply them to your copy. You should make sure your working directory is clean before pulling; otherwise we have to discuss the stashing or rebasing process.
Tell someone else about your commits
If you cloned another repo, you can git push to send all your commits back to the origin... if you have permission, of course. Otherwise, you can tell the owner about your changes and suggest they do the opposite operation: git pull your changes into the "main" repository.
Anything else?
You should have just enough information to be dangerous. Start tracking your changes, collaborate with others, and learn by doing. You can almost always recover from a mistake with git, so don't worry too much about screwing up the repository. Push buttons and break things, then learn how to fix them.
One of git's best features is creating "branches" for new ideas and then easily merging them together. The git book has a great chapter on branching and merging. Check that out, and read through the links up top; their authors are all way better at using git than I am.