Maintaining multiple Haxe versions – Version 2.

This is an update to my previous post on this subject.

I now use a script to switch between different versions of the Haxe compiler instead of editing ~/.profile.
So here are the steps:

The Haxe binaries are here: http://haxe.org/file/

mkdir ~/bin
cd ~/bin
wget http://haxe.org/file/haxe-2.08-osx.tar.gz
wget http://haxe.org/file/haxe-2.09-osx.tar.gz
wget http://haxe.org/file/haxe-2.10-osx.tar.gz
tar -xzvf haxe-2.08-osx.tar.gz
tar -xzvf haxe-2.09-osx.tar.gz
tar -xzvf haxe-2.10-osx.tar.gz
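The repeated download-and-extract steps can also be wrapped in a small function (a sketch; fetch_haxe is a hypothetical name, and it assumes the haxe.org URL pattern shown above and that wget is installed):

```shell
# fetch_haxe <version>... - download and unpack one or more Haxe releases
# (hypothetical helper; assumes the http://haxe.org/file/ URL pattern above)
fetch_haxe() {
  for v in "$@"; do
    wget "http://haxe.org/file/haxe-$v-osx.tar.gz" &&
      tar -xzvf "haxe-$v-osx.tar.gz" || return 1
  done
}
```

Then fetch_haxe 2.08 2.09 2.10 reproduces the six commands above in one go.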

Now make the placeholder for the “current” Haxe version:

mkdir ~/bin/haxe

Next edit ~/.profile and create the necessary environment vars.

export HAXEPATH=~/bin/haxe
export PATH=$HAXEPATH:$PATH

export HAXE_LIBRARY_PATH=$HAXEPATH/std:.
export PATH=$HAXE_LIBRARY_PATH:$PATH

Next create this script, hxlink.sh:

#!/bin/sh
version=$1
base=~/bin
# remove the old link (or the initial placeholder directory) and relink
rm -rf "$base/haxe"
ln -s "$base/haxe-$version-osx" "$base/haxe"

Save the hxlink.sh script wherever you like and make it executable (chmod +x hxlink.sh). The script links a specific version of Haxe to the HAXEPATH defined in ~/.profile.
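For anyone who wants a little more safety, here is a sketch of a function-based variant that refuses to relink if the requested version isn't actually installed (hxlink and HXLINK_BASE are hypothetical names I've introduced here; rm -rf is used so a leftover placeholder directory at ~/bin/haxe doesn't trip it up):

```shell
# hxlink <version> - point $base/haxe at $base/haxe-<version>-osx
# HXLINK_BASE is a hypothetical override (mainly useful for testing);
# it defaults to ~/bin as in the post.
hxlink() {
  version=$1
  base=${HXLINK_BASE:-$HOME/bin}
  target="$base/haxe-$version-osx"
  if [ ! -d "$target" ]; then
    echo "hxlink: no such version: $target" >&2
    return 1
  fi
  rm -rf "$base/haxe" && ln -s "$target" "$base/haxe"
}
```

Usage is the same: hxlink 2.08.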

You can also add an alias for hxlink.sh to your ~/.profile, which will allow you to run it from anywhere by just typing ‘hxlink’:

alias hxlink='~/scripts/hxlink.sh'

Now you can simply run

hxlink 2.08

And running haxe should now show that version 2.08 is active:

haxe
haXe Compiler 2.08 - (c)2005-2011 Motion-Twin
Usage : haxe -main <class> [-swf|-js|-neko|-php|-cpp|-as3] <output> [options]
Options :
-cp <path> : add a directory to find source files
-js <file> : compile code to JavaScript file
-swf <file> : compile code to Flash SWF file
-as3 <directory> : generate AS3 code into target directory
-neko <file> : compile code to Neko Binary
-php <directory> : generate PHP code into target directory
-cpp <directory> : generate C++ code into target directory
-xml <file> : generate XML types description
-main <class> : select startup class
-lib <library[:version]> : use a haxelib library
-D <var> : define a conditional compilation flag
-v : turn on verbose mode
-debug : add debug informations to the compiled code
-help Display this list of options
--help Display this list of options

A couple of Dark corners in Haxe

Every language has its “Dark Corners”, the odd little gotchas or head scratchers that can soak up a bit of time trying to understand and work around.
Haxe is no different.

Here are a couple that I’ve encountered over the last couple of days….

Strange import issue.

package org.pixelami.binding;
import org.pixelami.binding.BindingBuilder;
@:autoBuild(org.pixelami.binding.BindingBuilder.build())
interface IBindableModel {}

Produces this error …

haxe.macro.#Context has no field getLocalClass

Removing the import compiles without incident.

package org.pixelami.binding;

@:autoBuild(org.pixelami.binding.BindingBuilder.build())
interface IBindableModel {}

I’m not even going to spend any time trying to understand that one … just safely file it under “dark corners”.

The next one, I believe, is very tricky for Haxe newcomers, and for this reason I would like to see the language reduce the room for ambiguity.

Basically this next “dark corner” is caused by the compiler inferring the wrong type. Hardly a “big” issue, one might think, but when mixed with Haxe’s public inner types, things can get pretty tricky pretty quickly, and the solutions are not always obvious.

Here was my baptism of fire (in pseudo code):

import haxe.macro.Type;
import haxe.macro.Expr;
import haxe.macro.Context;
.....
var t:ComplexType = TPath({ name : "String", pack : [], params : [], sub : null });
var kind = FVar(t, expr);

Produces this ominous error…

... line 6 : haxe.macro.ComplexType should be haxe.macro.VarAccess
... line 6 : For function argument 'read'

Basically the compiler didn’t like the way I was constructing FVar.
Now luckily I had been working with the ClassField typedef the day before, and had noticed that ClassField has a kind field expecting a FieldKind enum, which defines a different FVar constructor:

FVar(read:VarAccess, write:VarAccess).

The FVar that I wanted to be using was the FVar defined in FieldType with the signature

FVar(t:ComplexType,?e:Expr)

Luckily, because I was aware of this other FVar, the error message did make some sense: it seemed the compiler was incorrectly inferring the type of FVar.
If I hadn’t known about the other FVar, well, I’m thinking I would have been scratching my head, Googling, and finally composing an email to the mailing list.

Ok, I thought, I’ll give the compiler a helping hand … I tried…

import haxe.macro.Type;
import haxe.macro.Expr;
import haxe.macro.Context;
.....
var t:ComplexType = TPath({ name : "String", pack : [], params : [], sub : null });
var kind:FieldType = FVar(t, expr);

But this didn’t work… inference seemingly thwarted by the imports?

Ok, taking another look … I could see the problem was that I had these two imports…

import haxe.macro.Type;
import haxe.macro.Expr;

And both of these types define ‘conflicting’ inner FVar definitions… hmmm… Haxe, “you’re a cheeky one, aren’t you?”.
If you are going to allow this kind of inference, then inner types should have to be explicitly referenced via their full inner type path, no?

Ok, let’s fix this PITA. Once you know what the problem is, the solution is to give the compiler the explicit hint that it needs.

import haxe.macro.Type;
import haxe.macro.Expr;
import haxe.macro.Context;
.....
var t:ComplexType = TPath({ name : "String", pack : [], params : [], sub : null });
var kind:FieldType = FieldType.FVar(t, expr);

Or, of course…
You could just change the order of the imports … hmmm.

import haxe.macro.Expr;
import haxe.macro.Type;
import haxe.macro.Context;
.....
var t:ComplexType = TPath({ name : "String", pack : [], params : [], sub : null });
var kind:FieldType = FVar(t, expr);

Now as I said in the intro – all Languages have their dark corners, but let’s think about that for a moment.

When I first saw enum fields just hanging loose inside Haxe code, with absolutely no clue as to where they came from, well, let’s say I was a little concerned. Not least because it was bloody impossible to tell where they were declared; I could also see that collisions were going to be easy to create, and that this was going to lead to time wasted looking for their source.

To digress slightly here … all time wasted on things like this means less time delivering features. In a commercial team this can lead to less product features delivered which can lead to a loss of revenue which can lead to having to cut the dev team which can lead to little Timmy not getting his new pair of shoes for Xmas because Daddy/Mummy lost their job because a critical release failed due to troubleshooting weird import issues in a Haxe project.

In short Haxe’s lack of strictness in this area could cost little Timmy his new shoes – and so I say – “Please, for the sake of little Timmy’s feet – fix this !”

To my mind – if Haxe is going to have public inner types (and that’s another debate) then the compiler should probably enforce the use of their full inner type namespace.

e.g. Expr.FieldType.FVar, Type.FieldKind.FVar.

Ok, it’s more characters to type, but it removes ambiguity, and that has to be the overriding priority when designing for robustness. Remember, little Timmy’s feet are always growing, and if his shoes become too tight, well, this can lead to problems in later life.

Another alternative could be for the compiler simply to throw an error when it detects a potential type ambiguity due to a name collision of inner types… devs would then get into the habit of being more explicit when referencing them.

Actionscript to Haxe

I have been using Haxe for a little while now and while the syntax is very similar to Actionscript 3 – there are some notable differences.

Here’s my list:

Type Comparison
Actionscript 3:

if(obj is MyType)...

Haxe:

if(Std.is(obj, MyType))...
Dynamically Call Method
Actionscript 3:

method.apply(scope,args);

Haxe:

Reflect.callMethod(obj, Reflect.field(obj,methodName), args);
Type Cast
Actionscript 3:

var myType:MyType = MyType(obj);

Haxe:

var myType:MyType = cast(obj, MyType);
Dynamic Type
Actionscript 3:

dynamic public class MyType { ...

Haxe:

class MyType implements Dynamic {...
Dynamic Object
Actionscript 3:

var obj:Object = {};

Haxe:

var obj:Dynamic = {};
Number / Float
Actionscript 3:

var x:Number = 1.6;

Haxe:

var x:Float = 1.6;
isNaN
Actionscript 3:

if(isNaN(x)) ...

Haxe:

if(Math.isNaN(x)) ...
Getter Setter
Actionscript 3:

private var _selectedIndex:int;

public function get selectedIndex():int
{
    return _selectedIndex;
}

public function set selectedIndex(value:int):void
{
    _selectedIndex = value;
}

Haxe:

private var _selectedIndex:Int;

public var selectedIndex(get_selectedIndex, set_selectedIndex):Int;

private function get_selectedIndex():Int
{
    return _selectedIndex;
}

private function set_selectedIndex(value:Int):Int
{
    return _selectedIndex = value;
}
Getter Read Only
Actionscript 3:

private var _selectedIndex:int;

public function get selectedIndex():int
{
    return _selectedIndex;
}

Haxe:

private var _selectedIndex:Int;

public var selectedIndex(get_selectedIndex, null):Int;

private function get_selectedIndex():Int
{
    return _selectedIndex;
}

or a shorter version if you don’t need to keep a private value:

public var selectedIndex(default, null):Int;

read more about Haxe properties here

EDIT:
there are more good examples of Actionscript 3 vs Haxe here

uvccapture on Raspberry Pi (Debian Squeeze)

Here is how I got uvccapture running and taking snaps from a Logitech webcam. (Logitech, Inc. QuickCam Pro 9000)

Firstly, read and follow the Raspberry Pi Debian Squeeze firmware update process. It is required in order to enable v4l2 in the kernel.

Once you have the updated Raspberry Pi firmware you are ready to do

$ sudo apt-get install uvccapture

To take a snapshot you can run

$ uvccapture -S80 -B80 -C80 -G80 -x800 -y600

if you see this error:

ERROR opening V4L interface
: Permission denied

You have a couple of options: use sudo, or, better, add the pi user to the video group, which will give the pi user permission to use the webcam.

$ sudo usermod -a -G video pi

Check that the user is now in the video group:

$ id pi
uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),29(audio),44(video),46(plugdev),100(users),111(lpadmin),119(admin),122(sambashare),136(vchiq),257(powerdev)

Then switch user to pi; this forces a reload of the new group memberships, which is required before running uvccapture without sudo.

$ su pi

uvccapture did require a bit of tweaking to get decent quality photos.

uvccapture --help

shows all available options.

Apart from that, uvccapture is nice and fast: approx 0.3-2 seconds to take an 800 x 600 image.
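For repeated captures it helps to stamp each shot with the current time, so snapshots don't overwrite each other. A minimal sketch, assuming uvccapture supports an -o flag for the output file (check uvccapture --help); snap_name and take_snap are hypothetical names I've introduced here:

```shell
# snap_name [prefix] - build a timestamped jpg filename (hypothetical helper)
snap_name() {
  echo "${1:-cam}-$(date +%Y%m%d-%H%M%S).jpg"
}

# take_snap [prefix] - take one snapshot into a timestamped file
# (assumes uvccapture's -o output-file option; flags as in the post)
take_snap() {
  uvccapture -S80 -B80 -C80 -G80 -x800 -y600 -o"$(snap_name "$1")"
}
```

Calling take_snap webcam from cron every minute would then give you a simple time-lapse.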

Raspberry Pi firmware update for Debian squeeze

I have been experimenting with the RaspberryPi. In particular I wanted to get a webcam working with the Debian “squeeze” distro available on the Raspberry Pi downloads page.

The original Debian squeeze img contains a kernel that does not have v4l2 configured. However the latest firmware for the Raspberry Pi does include a recompiled kernel with v4l2 support added.

There is a tool for updating the Raspberry Pi firmware – however a bit of cajoling was required to get it to work.

$ sudo apt-get install ca-certificates
$ sudo apt-get install git-core
$ sudo wget http://goo.gl/1BOfJ -O /usr/bin/rpi-update
$ sudo chmod +x /usr/bin/rpi-update
$ sudo rpi-update
Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS
Performing self-update
Autodetecting memory split
Using ARM/GPU memory split of 192MB/64MB
We're running for the first time
Setting up firmware (this will take a few minutes)
Using SoftFP libraries
/opt/vc/sbin/vcfiled: error while loading shared libraries: libvchiq_arm.so: cannot open shared object file: No such file or directory
If no errors appeared, your firmware was successfully setup
A reboot is needed to activate the new firmware

The first time I tried this I ignored the error and rebooted only to get a black screen and the Raspberry Pi’s “red light of fail”.

So I set up a fresh image and started again.
This time I added a couple of extra steps

$ sudo rpi-update
Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS
Performing self-update
Autodetecting memory split
Using ARM/GPU memory split of 192MB/64MB
We're running for the first time
Setting up firmware (this will take a few minutes)
Using SoftFP libraries
/opt/vc/sbin/vcfiled: error while loading shared libraries: libvchiq_arm.so: cannot open shared object file: No such file or directory
If no errors appeared, your firmware was successfully setup
A reboot is needed to activate the new firmware
$ sudo ldconfig
$ sudo rpi-update
Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS
Performing self-update
Autodetecting memory split
Using ARM/GPU memory split of 192MB/64MB
Updating firmware (this will take a few minutes)
Your firmware is already up to date
$ sudo reboot

By running

$ sudo ldconfig
$ sudo rpi-update

after the first failure, rpi-update completes and the reboot is successful.

On reboot running

$ ls /dev/

shows that /dev/video0 is now available.
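A tiny sanity check (a sketch; has_cam is a hypothetical helper, with the device path as an argument only so the check is easy to exercise) saves pointing capture tools at a camera that isn't there:

```shell
# has_cam [device] - succeed if the video device node exists
# (defaults to /dev/video0, the node created once v4l2 is enabled)
has_cam() {
  [ -e "${1:-/dev/video0}" ]
}

has_cam && echo "camera ready" || echo "no /dev/video0 - check firmware/v4l2"
```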

Maintaining multiple Haxe versions

UPDATE: There is a newer post on this subject with my current preferred method of switching between different installations of the compiler

If you are working on multiple projects you may find that you need to be able to compile different projects with different versions of the Haxe compiler.

After a little trial and error, here’s how I got it set up on OS X…

The Haxe binaries are here: http://haxe.org/file/

mkdir ~/bin
cd ~/bin
wget http://haxe.org/file/haxe-2.08-osx.tar.gz
wget http://haxe.org/file/haxe-2.09-osx.tar.gz
tar -xzvf haxe-2.08-osx.tar.gz
tar -xzvf haxe-2.09-osx.tar.gz

Next I edited my ~/.profile and created the necessary environment vars.

export HAXEPATH=~/bin/haxe-2.09-osx
#export HAXEPATH=~/bin/haxe-2.08-osx
export PATH=$HAXEPATH:$PATH

export HAXE_LIBRARY_PATH=$HAXEPATH/std:.
export PATH=$HAXE_LIBRARY_PATH:$PATH

I also found that in my case it was necessary to change the default location of the haxelib repository after getting some odd behaviour from haxelib.
I think this was because I had manually deleted the default installation in /usr/lib/haxe , but then re-copied just the haxe libs back to /usr/lib/haxe/lib.
Anyway after changing the haxelib directory to somewhere in my user space the errors went away.

This is how to reconfigure the haxelib repository folder.

haxelib setup ~/dev/lib/haxe

So now if I need to compile with 2.08 I can edit my ~/.profile and uncomment

#export HAXEPATH=~/bin/haxe-2.09-osx
export HAXEPATH=~/bin/haxe-2.08-osx

After editing ~/.profile you need to reload it with:

source ~/.profile

It would be nice if haxelib had a feature for managing multiple compiler versions just like libs.

Haxe unit testing with munit

We all love TDD – right ?

So you’re starting a Haxe project and you need to write some tests while you develop. Well, thanks to massiveinteractive there exists a very good unit testing library that works pretty much like flexunit.
It’s called munit and can be found here

Here’s my quick start guide:

For starters we want a top level project structure that looks something like this:

[image: top level project structure]

Inside src we would place our package folder structure; inside test we would place the test package structure, mirroring our src packages.

[image: expanded project structure]

Now at a terminal we want to cd to our project folder and install munit if we don’t already have it.

$ haxelib install munit

Once installed we are ready to configure munit for our project.

$ haxelib run munit config

At which point we will enter an interactive configuration session that allows us to configure various folder locations…

Massive Unit - Copyright 2012 Massive Interactive. Version 0.9.2.3
Configure munit project settings
--------------------
test src dir (defaults to 'test') :
output build dir (defaults to 'build') :
report dir (defaults to 'report') :
target class paths (comma delimitered, defaults to 'src') : src
hxml file (defaults to 'test.hxml') :
resources dir (optional, defaults to 'null') : resources
templates dir (optional, defaults to 'null') :

I have used defaults for all but two of the settings: I have explicitly defined the src folder, and also specified a resources folder. If your test folder or output locations are different then this is where you would configure them.

Once that is done you will find that a .munit file has been created inside the top level project folder (where you ran haxelib run munit config).
The contents of the .munit file are pretty straightforward:

version=0.9.2.3
src=test
bin=build
report=report
hxml=test.hxml
classPaths=src
resources=resources
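Since .munit is plain key=value text, individual settings are easy to read back from a script. munit_get below is a hypothetical helper of my own, not part of munit itself:

```shell
# munit_get <key> [file] - print one value from a key=value .munit file
# (hypothetical helper; file defaults to .munit in the current directory)
munit_get() {
  sed -n "s/^$1=//p" "${2:-.munit}"
}
```

For example, munit_get bin prints the configured output build directory.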

At this point you are ready to generate all the test files and the test.hxml that can be used to build and run the tests.

$ haxelib run munit gen

When we run this command our test folder will be scanned for all our test cases and the files required to run them are generated.

If we take a look at our project structure now we’ll see something like this:

[image: project structure after test generation]

Note that 4 files have been generated for us: ExampleTest.hx, TestMain.hx, TestSuite.hx and test.hxml.
ExampleTest.hx can be deleted, but it serves as a useful quick reference for the correct metadata to use when declaring test methods.

Here’s the source of ExampleTest.hx

import massive.munit.util.Timer;
import massive.munit.Assert;
import massive.munit.async.AsyncFactory;

/**
* Auto generated ExampleTest for MassiveUnit.
* This is an example test class can be used as a template for writing normal and async tests
* Refer to munit command line tool for more information (haxelib run munit)
*/
class ExampleTest
{
	private var timer:Timer;

	public function new()
	{

	}

	@BeforeClass
	public function beforeClass():Void
	{
	}

	@AfterClass
	public function afterClass():Void
	{
	}

	@Before
	public function setup():Void
	{
	}

	@After
	public function tearDown():Void
	{
	}

	@Test
	public function testExample():Void
	{
		Assert.isTrue(true);
	}

	@AsyncTest
	public function testAsyncExample(factory:AsyncFactory):Void
	{
		var handler:Dynamic = factory.createHandler(this, onTestAsyncExampleComplete, 300);
		timer = Timer.delay(handler, 200);
	}

	private function onTestAsyncExampleComplete():Void
	{
		Assert.isFalse(false);
	}

	/**
	* test that only runs when compiled with the -D testDebug flag
	*/
	@TestDebug
	public function testExampleThatOnlyRunsWithDebugFlag():Void
	{
		Assert.isTrue(true);
	}

}

We are now ready to run our tests with this command

haxelib run munit test test.hxml
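When iterating, the gen-then-test cycle can be wrapped in a tiny helper (a sketch; munit_cycle is a hypothetical name, and it assumes haxelib and munit are installed and configured as above):

```shell
# munit_cycle - regenerate the test runner files, then build and run the tests
# (hypothetical wrapper around the two munit commands used in this post)
munit_cycle() {
  haxelib run munit gen &&
    haxelib run munit test test.hxml
}
```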

And that’s all for now. (I did say this was quick start guide.)

Howto add user on archlinux

A very quick cheat sheet for setting up a user on archlinux-arm

Create a user called ‘myuser’ (obviously replace this with your desired username)

# useradd -m -g users -G \
audio,lp,optical,storage,video,wheel,games,power,scanner \
-s /bin/bash myuser

Set the password

# passwd myuser
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Add to sudo users (Optional)

# visudo

If visudo does not exist then install with

# pacman -S sudo

Once vi launches, use the arrow keys to scroll to the line:

# %wheel	ALL=(ALL) ALL

Press ‘x’ twice to delete the leading ‘# ’ and uncomment the line. Then exit vi: press ESC, type “:wq” and press ENTER.
Now change to your newly created user account

# su myuser

Now edit the .bashrc to get TAB completion working for the new user when using sudo.

$ nano ~/.bashrc

And add the following line at the bottom.

complete -cf sudo

Then do

$ source ~/.bashrc

to reload .bashrc

Neko on Archlinux-arm (ARMv5)

Today I decided to try getting the Neko VM running on my Pogoplug (Pink), an ARMv5 device.
Here’s how it went…

Firstly I had to install development packages.

# pacman -S kernel-headers file base-devel abs

But that failed with some 404 errors:
failed retrieving file ‘xxx-arm.pkg.tar.xz’ from mirror.archlinuxarm.org : The requested URL returned error: 404

So I ran

# pacman -Syu

which synced everything after which I could proceed with installation of development packages

# pacman -S kernel-headers file base-devel abs

Once installed I had to grab the PKGBUILD files for Neko from here (The wget command below takes care of this)

You may also need to install subversion if it is not already installed on your system

# pacman -S subversion

We’re now ready to build the Neko package.
From the Pogoplug terminal as non root user:

$ wget http://aur.archlinux.org/packages/ne/neko/neko.tar.gz
$ tar -xzvf neko.tar.gz
$ cd neko
$ sudo makepkg --asroot -Acs

The build takes quite some time and grabs any dependencies that it requires.
Once completed it generates a package called neko-1.8.2-7-arm.pkg.tar.xz, which can be installed with

$ sudo pacman -U neko-1.8.2-7-arm.pkg.tar.xz

Once installed you can run Haxe code on a Pogoplug if you target Neko :-)
This bodes well for running Neko on RaspberryPi too, since there already exists an ArchlinuxArm distro for the RaspberryPi.

As a side experiment I wanted to see if Neko would build using the yaourt package management system.

$ sudo pacman -S yaourt
$ yaourt -AS neko

However this failed with a not very helpful error:

/bin/sh: line 1: 21293 Killed                  LD_LIBRARY_PATH=../bin: NEKOPATH=../boot:../bin ../bin/neko nekoml -nostd neko/Main.nml nekoml/Main.nml
make: *** [compiler] Error 137
==> ERROR: A failure occurred in build().
    Aborting...
==> ERROR: Makepkg was unable to build neko.

Anyway the first method did work, and maybe someone with more knowledge than myself might be able to get it working with yaourt.

Useful Links:
http://archlinuxarm.org/
http://archlinuxarm.org/platforms/armv5/pogoplug-v2-pinkgray
http://archlinuxarm.org/developers/building-packages
http://nekovm.org/
http://haxe.org/doc/targets/neko

Mount drives using serial number to name mapping with udev

I recently installed Arch Linux Arm v5 on my Pogoplug v2 and I wanted to configure the box to always mount the usb drives at the same mount points.

The solution was to create a udev rule and place it in /etc/udev/rules.d

here is the rule: /etc/udev/rules.d/15-usb-ext.rules

# This section defines the mapping of serial numbers to names
# To find serial numbers as seen by udev use udevadm
# e.g. udevadm info -a -n /dev/sdb

## WD 1
DRIVERS=="usb", ATTRS{serial}=="57442D57434153xxxxxxxxxx", NAME="WD-2TB-1"

## WD 2
DRIVERS=="usb", ATTRS{serial}=="574D415A413132xxxxxx", NAME="WD-2TB-2"

## WD 3
DRIVERS=="usb", ATTRS{serial}=="57442D57434155xxxxxxxxxxxxxx", NAME="WD-1TB"

# Start at sdb to avoid system harddrive.
KERNEL!="sd[b-z][0-9]", GOTO="media_by_label_auto_mount_end"

# Import FS infos
IMPORT{program}="/sbin/blkid -o udev -p %N"

# If we have picked up a name (defined above) then use it
NAME!="", ENV{dir_name}="$name"
# otherwise use the kernel name (e.g. "sdb1" , "sdc1" , etc )
NAME=="", ENV{dir_name}="mnt-%k"
# Global mount options
ACTION=="add", ENV{mount_options}="relatime"
# Filesystem-specific mount options
ACTION=="add", ENV{ID_FS_TYPE}=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"

# Mount the device
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"

# Clean up after removal
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"

# Exit
LABEL="media_by_label_auto_mount_end"

I don’t really want to get into the details of writing udev rules (this resource helped me decipher how they work), but here is a very brief outline of what’s going on.

DRIVERS=="usb", ATTRS{serial}=="57442D57434153xxxxxxxxxx", NAME="WD-2TB-1"

The rule checks that the newly detected device is USB and has serial number “57442D57434153xxxxxxxxxx”; if the rule matches then the name “WD-2TB-1” is assigned to the device.
If no matching serial is found the rule defaults to using the kernel name of the device:

NAME=="", ENV{dir_name}="mnt-%k"

See man udev to find out all about %k, %n and $name (http://linux.die.net/man/8/udev).
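The serial-to-name mapping can also be mirrored in plain shell as a way to sanity-check which name a given drive would get. This is a hypothetical helper of my own (with the serials redacted exactly as in the rule above); udev itself does not use it:

```shell
# drive_name <serial> <kernel-name> - mimic the rule's serial-to-name mapping
drive_name() {
  case $1 in
    57442D57434153*) echo "WD-2TB-1" ;;
    574D415A413132*) echo "WD-2TB-2" ;;
    57442D57434155*) echo "WD-1TB"   ;;
    *)               echo "mnt-$2"   ;;  # fallback: kernel name, as in the rule
  esac
}
```

For example, drive_name with an unmatched serial and kernel name sdc1 yields mnt-sdc1, matching the rule's fallback behaviour.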

It’s important to note that after saving a rule, the rule is applied the next time a device is picked up by the system. Unplugging and then replugging the drives will cause them to be detected and the udev rule to be applied.

At this point I could edit my /etc/samba/smb.conf and add shares for my drives, which will now always be found at the same mount points:

[extra]
  path = /media/WD-2TB-1/
  read only = no
  public = yes
  writable = yes
  force user = root

[media]
  path = /media/WD-2TB-2/
  read only = no
  public = yes
  writable = yes
  force user = root

[bkup]
  path = /media/WD-1TB/
  read only = no
  public = yes
  writable = yes
  force user = root

Then restart samba with

rc.d restart samba

Bon appetit.