2013-10-09

Blender Experiment: Camera Tracking

After watching Oliver Villar Diz's excellent and detailed tutorial on camera tracking using Blender, I was motivated to try it out myself. So on a recent trip to Florida, I shot a short piece of video that I planned to use (containing some lousy voice-over acting). As with any learning experience, I ran into some difficulties and made many mistakes. But this still ended up being a fun project and, despite being cheesy, I'm happy with how it turned out.

Image Sequences

The tutorial strongly recommends converting the movie clip to a sequence of images. At first, I planned on skipping this step because I didn't want the hassle of converting to and from different formats. But I went ahead and followed the advice, which worked out well since image sequences let you easily tweak individual frames of the result. Fortunately for me, iMovie HD supports exporting a video clip to a sequence of .png files, so it was easy to do.

The downside was that it took some effort to find a Mac utility that converts the images back into a movie file. Although iMovie was able to create the sequence of images, for some strange reason it doesn't support importing one! I eventually found ffmpeg for the Mac, which works great.
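For reference, a single ffmpeg invocation can stitch a numbered image sequence back into a movie. The frame rate and the frame%04d.png naming pattern below are assumptions; adjust them to whatever your export actually produced:

ffmpeg -r 30 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4

The -pix_fmt yuv420p flag keeps the output playable in common video players, which expect that pixel format.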

Camera Tracking

The point of this project was applying what I learned from the camera tracking tutorial, so my first effort was getting the camera to track my video clip. I found that adding tracking points to the clip is a very time-consuming process. The fitter needs a lot of data to accurately place the camera, so you spend a lot of time picking features that are easy to trace and following their paths through the clip. Blender automates much of the process, so each point doesn't take too long to track. But the sheer number of points needed to produce the camera's path adds up, so it still takes a while.

So after spending an hour or so, I was ready to let Blender calculate the camera path. And that's when I ran into a problem.

Blender supports two modes of camera tracking: motion-based and tripod-based. With motion tracking, the camera has to move far enough in the scene that the traces provide enough information to compute the perspective. In tripod mode, the camera stays in one place and the trace points provide rotational information for the camera.

In Oliver's tutorial, he pointed the camera in one direction while it moved the length of a basketball court. That motion provided a lot of trace points so Blender was able to figure out the proper perspective. The motion in my clip, however, wasn't conducive to either mode: I had minimal motion while pointing towards the ocean and then, when I turned 90 degrees to the right, the camera also moved approximately the depth of the balcony. With this motion, neither mode was able to find a solution with a low error. To get it to work, I threw away all the track points I set in the ocean frames and just used the track points on the house. This worked much better and Blender was able to give a decent solution that I could use.

3D Model

We vacationed in Port St. Joe, Florida, which happens to be near Tyndall Air Force Base. My brother had heard the area has a reputation for UFO sightings, so it didn't take long to decide what the video clip was going to show.

The 3D modeling in this video was simple. I started with a basic saucer shape and gave it a glossy silver finish. That was all I planned to do for the model but, when I animated the scene, it didn't seem "alive" enough. So I added detail to the underside, using a dark curved surface with glowing cyan spheres - you know, typical alien equipment. The bottom carriage rotates 45 degrees every 9 frames. Adding that little extra motion made it much more interesting to watch.

Next, I placed it in the scene. I used the more "stable" points of the camera motion (i.e. the moments when it looked like the cameraman was focusing on something) to set the keyframes. The UFO then travelled smoothly between those points.

The last rendering detail was the lighting. I added a "sun" light source and tried to set its angle to match the shadows in the video. Although I didn't set up the saucer to cast a shadow, the light source created an accurate bright reflection.

Composition Editor

Using the composition editor was the most enjoyable part of the project because it's easy to use, fun to work with, and it provides much of the "magic" that makes the final clip believable. With the editor, I was able to:

  • Adjust the contrast and colors of the rendered object to match the color, brightness, and saturation of the video clip. It would have been very difficult to pick colors for the spaceship that matched the video's lighting, so I didn't even try. I chose some rough coloring and reflectivity for it and then used the composition editor to adjust the result.

  • Using the "motion blur" node, I could re-render individual frames where the camera motion had blurred the background more than the renderer's own motion blur. This really added to the realism of the scene!

  • Following Oliver's "green screen" tutorial, I set the sky as my key color (thank goodness it was a cloudless day!). I set up nodes to compute the mask and then merge the video with the rendered object.

Other Details

Even after creating the final scene, I found myself making tweaks here and there until I had to make myself stop.

  • I made the flying saucer start moving before the camera began panning across the house next door. This made it appear that the cameraman was reacting to the saucer's motion. I also made sure the saucer reappeared from behind the house a little after the camera stopped.

  • I added some "sci-fi" sound effects from iMovie's library. When the saucer disappeared behind the house, I lowered the volume of the sound. When the saucer sped off, I delayed the sound effect by a couple of frames to give a subtle hint of the distance between it and the camera.

That's All!

Blender was intimidating when I first used it. But doing this little throwaway project exposed me to more of Blender than I would have thought. I'm looking forward to learning more in future projects.

Thanks to BlendTuts for creating these tutorials and making Blender understandable!

2013-06-19

Parsing Erlang Terms in OCaml (part 2)

In this installment, we'll continue my example of parsing with OCaml by focusing on the parser.

As I was developing this program, I didn't write the tokenizer first and then the parser. It was an iterative process: sometimes I made progress breaking the input into tokens and then supported those tokens in the grammar; other times, I defined more of the grammar and then tweaked the tokenizer to support it. Describing this back-and-forth across a few blog entries would get confusing, so I'll simply focus on the parser for now and explain the tokenizer in the next article.

How to Invoke

The grammar is defined in a source file with a .mly suffix, from which ocamlyacc generates an .ml file. So, for instance, if my grammar specification is in a file called grammar.mly, I would compile it with:

ocamlyacc grammar.mly

This command generates grammar.ml (along with an interface file, grammar.mli), which holds the parser implementation. Amid all the functions and tables used to recognize the language is an exported function, named after the grammar's start symbol, which starts all the processing. This function takes two parameters: a function that breaks the input into tokens, and a lexer buffer that supplies the raw input. The function used for the first parameter will be created when we define the tokenizer. The second parameter is obtained from the Lexing module in the standard library.
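As a rough sketch of how these pieces fit together (the module names Grammar and Lexer, and the entry point Lexer.token, are my assumptions for this series; the tokenizer doesn't exist yet):

let () =
  (* Build a lexer buffer that reads from standard input. *)
  let lexbuf = Lexing.from_channel stdin in
  (* Ask the parser for one top-level term at a time until EOF. *)
  let rec loop () =
    match Grammar.next Lexer.token lexbuf with
    | Some _term -> (* ... process the term here ... *) loop ()
    | None -> ()
  in
  loop ()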

Defining the Grammar

First we'll define the tokens we expect to get from the input. The order in which tokens may be arranged determines the grammar of the language. Some tokens are fixed, like operators in an expression or keywords in a programming language. Others represent values which vary but have a consistent format. For instance, a string token is represented by text between two quotes and a number token by one or more consecutive digits.

ocamlyacc has a section where the tokens are defined. If a token is fixed (i.e. punctuation), then the format is "%token NAME [NAME2 ...]". If the token represents a changing entity, then the syntax is extended to include the type of the value: "%token <type> NAME [NAME2 ...]"

For our Erlang term parser, we have several fixed tokens representing the punctuation used: square brackets mark where a list begins and ends, braces mark where a tuple begins and ends, commas separate values, a pipe separates a list's head from its tail, and a period ends a top-level term. We can itemize all of them in one line:

%token OLIST OTUPLE CLIST CTUPLE COMMA PERIOD PIPE EOF

There are four primitive types that the tokenizer will recognize (the container types will be handled by the parser). The four types read by the tokenizer are integers, floating point numbers, strings, and atoms. We associate these tokens with their expected types using the following three lines:

%token <int> INT
%token <float> FLOAT
%token <string> STRING ATOM

In the token section of the .mly file, we also get to specify the name of the function which begins the parsing and what its return value will be. In this case, we have a file containing a series of Erlang terms which we want to parse one term at a time. We'll have the function return an Erlang.t option: if a term is read, it returns Some v, and at end of file it returns None.

%start next
%type <Erlang.t option> next

The next section of the source file defines the rules of the grammar. Each rule consists of a name, followed by one or more patterns of tokens, each paired with OCaml code to execute when that pattern matches. The name of the rule will end up being the name of a function in the generated .ml file.

Our first rule will be our starting point, the function next. This function will return a parsed term or None if we reach the end of file. We know all top-level terms are followed by a period, so we define the next rule as:

next: term PERIOD               { Some $1 }
| EOF                           { None }

The $1 in the OCaml code will be replaced with the value of the first matched symbol. The second matched symbol would be accessed with $2, but here it's simply the period, so it isn't very interesting. Typically we only take the values of tokens that were declared with a type, since the fixed tokens merely establish the context of the grammar.

Now we need to add a term rule so Erlang terms can be matched. The four primitive types are simple:

term:
| ATOM                          { Erlang.Atom $1 }
| STRING                        { Erlang.String $1 }
| INT                           { Erlang.Integer $1 }
| FLOAT                         { Erlang.Number $1 }

The four tokens we match against (ATOM, STRING, INT, and FLOAT) are tokens that were defined with a type. When the OCaml code refers to one (the $1 in these examples), it receives the value that was associated with the token. Think of a token with a type as a variant type constructor that takes an argument. In fact, these four rules transform ocamlyacc "constructors" into our variant type's constructors.

We're not fully done with this rule, though. We also need to match Erlang tuples and lists:

| OLIST terms CLIST             { Erlang.List $2 }
| OLIST terms PIPE term CLIST   { Erlang.List (List.append $2 [$4]) }
| OTUPLE terms CTUPLE           { Erlang.Tuple $2 }

Even though I differentiate between Erlang tuples and lists, I stored the contents of both in an OCaml list. I didn't want to define an Erlang.Tuple2, Erlang.Tuple3, etc. for varying lengths of tuples. My OCaml knowledge isn't strong enough to know if there's a way to support different tuple sizes in a single variant type constructor, so I went for simplicity and saved the values in a list.

These rules match on the opening and closing tokens to determine whether it's a list or a tuple, and they rely on the terms rule to read in the values.

You may have also noticed that there are two rules to make a list. The Erlang designers, for some reason, allow building a list where the tail isn't another list! For instance, [ 3 | 9 ] is a list that behaves more like a 2-tuple. You can't pass such a "list" to many of the standard library list functions without throwing an exception, so I'm not sure of the wisdom in allowing these lists to be built. Unfortunately, they can occur (there are some in the profiler output!) so we need to support them. The grammar I've defined will recognize degenerate lists and convert them into a normal OCaml list.
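Following the rules above, the degenerate list [3|9] tokenizes into OLIST INT PIPE INT CLIST, matches the second list pattern, and comes out flattened:

[3|9].    parses to    Erlang.List [Erlang.Integer 3; Erlang.Integer 9]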

The last rule to define is the terms rule:

terms:                          { [] }
| term                          { [$1] }
| term COMMA terms              { ($1 :: $3) }

In the first line, there isn't a pattern, just OCaml code. This is an empty rule: it matches an empty sequence of tokens, when neither of the other two patterns applies. The rule will become a function named terms, and its use in the previous rules shows it needs to return a list of Erlang.t values. So if we don't match any pattern, we return the empty list.

What we're trying to do in the terms rule is recognize a comma-separated list of terms. The second line matches a term that isn't followed by a comma and returns a one-item list. The third line matches a term followed by a comma and then recursively matches the remaining terms, with the OCaml code consing the term onto the list built by the recursion. Parsing 1, 2, 3, for instance, nests two recursive matches and unwinds into [Integer 1; Integer 2; Integer 3].

Done for now

The calculator examples you typically find in YACC tutorials are very useful because they show operator precedence along with doing slightly more processing in each rule (by computing a result). This example shows a different use, where we take text representing data and convert it into something usable by OCaml functions. My example, along with the traditional calculator example, will hopefully make ocamlyacc more accessible to programmers.

The next article will cover tokenizing the input.

Here is the full content of grammar.mly:

%{
%}

%token <int> INT
%token <float> FLOAT
%token <string> STRING ATOM
%token OLIST OTUPLE CLIST CTUPLE COMMA PERIOD PIPE EOF

%start next
%type <Erlang.t option> next

%%

next: term PERIOD               { Some $1 }
| EOF                           { None }

term:
| ATOM                          { Erlang.Atom $1 }
| STRING                        { Erlang.String $1 }
| INT                           { Erlang.Integer $1 }
| FLOAT                         { Erlang.Number $1 }
| OLIST terms CLIST             { Erlang.List $2 }
| OLIST terms PIPE term CLIST   { Erlang.List (List.append $2 [$4]) }
| OTUPLE terms CTUPLE           { Erlang.Tuple $2 }

terms:                          { [] }
| term                          { [$1] }
| term COMMA terms              { ($1 :: $3) }

2013-06-06

Parsing Erlang Terms in OCaml (Part 1)

I recently used the Erlang profiler to find the bottleneck in a project. Using the profiler was relatively easy, but it wasn't obvious what its output was trying to say (at least not at my experience level). After trying to interpret the results, I realized I could do some processing on the output to pull out the information I needed. About the same time, I was investigating how to do parsing in OCaml using ocamllex and ocamlyacc. Although I was able to sort out the profiler information on my own, I still thought it would be interesting to try to parse the output using OCaml.

The Input

When saving performance statistics to a file, the profiler writes out the information using Erlang data terms. These can be primitive terms (e.g. integers, atoms) or containers of terms. Each top-level term in the file ends with a period. For instance, here's a file with three top-level terms:

[1,2,3].
this_is_an_atom.
[{ok, "this is a string"}, 5].

If you aren't familiar with Erlang, the first line is a list of three integers. The second line is a single atom with a long name. The third line is a little more complex: it's a list of two items. The first item is a tuple of two items: the atom ok and a string. The second item in the list is an integer.

These terms won't map directly into fundamental OCaml types for several reasons. First, there is no atom type in OCaml and, second, OCaml lists can only hold data of one type. Interestingly enough, we can model Erlang's loose typing using OCaml variant types. Another detail in our favor is that Erlang is a simple language in which you can't create new types, so representing all possibilities is easy.

The Output

We want the parser to read the terms from a file and represent them using OCaml values. To do this, we need to define how the terms will appear to an OCaml application. This, of course, is easily done by creating a module, Erlang, and defining the type t as:

type t =
  | Atom of string
  | String of string
  | Integer of int
  | Number of float
  | List of t list
  | Tuple of t list

The variant type, Erlang.t, can represent atoms, strings, and numbers. It can also represent containers like tuples and lists. OCaml type definitions are recursive by default, which means constructors of the type can take a value of the type of which they're a member. We see the List constructor takes, as an argument, an OCaml list of type Erlang.t, which can be any of these values, including other lists or tuples!
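To see that recursion at work from the consuming side, here's a small hypothetical helper (not part of the module as described; I'm assuming it lives inside the Erlang module, so the constructors are unqualified) that renders a term back into Erlang-ish text:

let rec to_string = function
  | Atom a -> a
  | String s -> "\"" ^ s ^ "\""
  | Integer i -> string_of_int i
  | Number f -> string_of_float f
  (* The container cases recurse into their Erlang.t elements. *)
  | List items -> "[" ^ String.concat ", " (List.map to_string items) ^ "]"
  | Tuple items -> "{" ^ String.concat ", " (List.map to_string items) ^ "}"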

At this point, we should pause a moment to appreciate OCaml's elegance; in seven lines of OCaml, we can describe six Erlang data types! I have to admit, however, I made several simplifications and alterations:

  • Erlang integers are arbitrary precision, so the Integer constructor should use the Big_int module in the OCaml standard library.
  • We shouldn't really have a String constructor since Erlang doesn't have strings (it has lists of small integers). However, most of the output I've seen uses a list of integers when the data truly is a list of integers, and quoted text when the data is meant to be human-readable, so our parser will take advantage of this.
  • Erlang also has process IDs and references, but they're typically represented by strings containing carefully formatted values, so we'll make them OCaml strings.

In practice, these limitations haven't been a problem.

So, knowing what our data types are going to be, the three lines in the above example file will get parsed and transformed into the following OCaml values:

Line 1: List [Integer 1; Integer 2; Integer 3]
Line 2: Atom "this_is_an_atom"
Line 3: List [Tuple [Atom "ok"; String "this is a string"];
              Integer 5]

Next...

In the next installment, I'll focus on the grammar used by ocamlyacc.

2013-05-22

C'mon, Rich!

I'm working on a three-part post, but haven't had time to proofread the text or thoroughly verify the code. Hopefully I'll get it completed and published soon...

2013-03-22

NetBSD on RPi: Minimizing Disk Writes

I recently installed NetBSD on my Raspberry Pi. Although not all the hardware is fully supported, enough is there to make it a usable system. It's nice to have my RPi provide the same system experience (configuration, organization, etc.) as the other NetBSD machines I maintain. A big "Thank you!" to the developers who made this possible.

One concern I have, however, is that the boot drive is an SD card. Solid state cards have limited write cycles, and most Unix systems assume a mechanical drive that allows essentially unlimited writes, so I worry my SD card will not last very long. To measure this, I monitored the disk writes using 'iostat' and was disappointed by how many writes were occurring on an otherwise idle system.

I enabled cron, syslogd, postfix, ntpd, and sshd on my system. This post shows how I greatly reduced writes to the SD with these services running. If you enable other services, you may have to make further adjustments.

To monitor writes (and to see if my tweaks were having any effect), I opened two windows with ssh sessions into the RPi. One window was used to make the modifications and the other was running 'iostat -w10 -x' so I could see the effects.

Toning Down Filesystem Updates

The first adjustment stops updating file access times. Anytime a command is run, the access time of its executable gets updated, which writes to the SD card (actually, I think it's two writes: the 'log' option, i.e. journaling, appears to be enabled, which means the journal gets written before the change is committed to the filesystem). This means each command run from the shell causes a write to the disk. If you run any shell scripts, the commands in them each cause a filesystem update!

To shut this option off, I changed this line in /etc/fstab:

/dev/ld0a   /  ffs     rw,log    1 1

to:

/dev/ld0a   /  ffs     rw,log,noatime,nodevmtime    1 1

You can also apply the noatime change immediately, without rebooting, by typing (as root):

mount -uo noatime /

Use RAM Disks for Temporary File Objects

My next step moves directories, which have frequent changes, to RAM. The following entries are now in my /etc/fstab:

tmpfs     /tmp                    tmpfs   rw,-s32M
tmpfs     /var/run                tmpfs   rw,union,-s1M
tmpfs     /var/mail               tmpfs   rw,union,-s10M
tmpfs     /var/spool/postfix      tmpfs   rw,union,-s20M
tmpfs     /var/db/postfix         tmpfs   rw,union,-s1M
tmpfs     /var/chroot             tmpfs   rw,union,-s10M

I used the 'union' option so mounting the RAM disks won't hide the underlying directory structure. You may also note I didn't include /var/log in the list. That's because I changed /etc/syslog.conf to send all syslog messages to a Synology NAS that I use, so my /var/log doesn't get written to. If you don't forward your syslog messages, then you'll want to slap a tmpfs on top of it, as well.
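For reference, forwarding everything to a remote log host takes one line in /etc/syslog.conf (the hostname here is a placeholder for your own log server):

*.*                    @nas.example.com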

It would be nice to put all of /var on a RAM disk, but NetBSD uses /var for longer-lasting information, too (e.g. the pkgsrc database). For now, I'll stick with the six mount points above.

Further Changes for Postfix

The changes up to this point greatly reduced the writes occurring on an idle system. However, there was one more 8k write happening once a minute. It turns out that the 'pickup' process in the Postfix suite wakes up once a minute and writes to a FIFO, and writing to a FIFO apparently updates its modification time. Editing the 'pickup' line in /etc/postfix/master.cf so that 'fifo' is changed to 'unix' fixes this last problem.
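In a stock master.cf, the pickup line looks roughly like this (your column spacing and flags may differ); only the service type field changes:

pickup    fifo  n       -       n       60      1       pickup

becomes:

pickup    unix  n       -       n       60      1       pickup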

Now, if I leave the system alone for a few hours, I can scroll back through the iostat output and see that no writes (or very, very few) are occurring. This should greatly extend the life of the SD card.

Hope this helps others!
