Shiro Kawai: Papers: Scheme

Gauche's Development Strategy: Small Projects in Particular Should Consider Internationalization (Special feature: Japanese software going global) (2011)

Information Processing Society of Japan (IPSJ), Digital Practice

For a small open-source project with few development resources, expanding beyond Japan may feel like a step for the future, something to do "once there are more users and more development resources." For software aimed at a niche audience, however, restricting the target to the domestic market only narrows an already small user base further. Going international from the start actually makes it easier to gather the users needed to sustain the project. Drawing on ten years of experience developing the open-source Scheme implementation Gauche, this article discusses a sustainable strategy for gaining users both in Japan and abroad with limited resources.

Multibyte character string processing in Scheme

Paper presented at the International Lisp Conference 2003.

Efficient flonum handling on a stack-based VM for dynamically typed languages (2008)

The 10th Workshop on Programming and Programming Languages (PPL2008)

Efficient floating-point number handling for dynamically typed scripting languages (2008)

The 2008 Symposium on Dynamic Languages

Typical implementations of dynamically typed languages treat floating-point numbers, or flonums, in a "boxed" form, since those numbers don't fit in a natural machine word once a few bits of the word are reserved for type tags. Naïve implementations allocate every instance of a flonum in the heap, and thus incur large overhead on numerically intensive computations. Compile-time type inference could eliminate boxing of some flonums, but it would be costly for highly dynamic scripting languages, in which a compiler runs every time a script is executed.

We suggest two modified stack machine architectures that avoid heap allocations for most intermediate flonums, and that can be relatively easily retrofitted to existing stack-based VMs. The basic idea is to have an arena for intermediate flonums that works as part of an extended stack or as a nursery. As in typical VMs, flonums are tagged pointers that point to native floating-point numbers, but when a new flonum is pushed onto the VM's stack, it actually points to a native floating-point number placed in the arena. Heap allocation occurs only when the flonum pointer needs to be moved to the heap. The two architectures differ in their strategies for managing the arena.

We implemented and evaluated these strategies in the Scheme implementation "Gauche". Both strategies showed a 30%-140% speedup on numerically intensive benchmarks, eliminating 99.8% of heap allocations of intermediate flonums, with little penalty on non-numerical benchmarks. Profiling showed that the speed improvement came from the elimination of flonum allocation and garbage collection.

More ...