On behalf of the Parrot team, I'm proud to announce Parrot 6.7.0, also known
as "Grey-headed Lovebird". Parrot (http://parrot.org/) is a virtual
machine aimed
at running all dynamic languages.

Parrot 6.7.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/devel/6.7.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.7.0 News:
- Core
+ find_codepoint: added more name aliases for perl6 (LF, FF, CR and NEL)
+ Optimized internal pcc CallContext calls from VTABLE to direct calls,
and inlined CallContext ATTR accessors to omit the object check.
[GH #1083]
- Documentation
+ Updated documentation for StringHandle.readall and FileHandle.readall,
which read just the rest of the buffer when tell > 0. [GH #1084]
- Tests
+ Improve test plan for t/library/pg.t
- Community
+ Our GSoC student Chirag Agrawal successfully passed the final evaluation.
All three project parts have already been merged.


The SHA256 message digests for the downloadable tarballs are:
04b0ee976c61100af993f8830863ccfee9eada6bf2b9c224850653d470dc9da2 parrot-6.7.0.tar.gz
a8d62af7cc93c39343311337acc5d7771107e99689a8df160bc8b3acc3ba99eb parrot-6.7.0.tar.bz2

Many thanks to all our contributors, especially our GSoC student Chirag Agrawal,
for making this possible, and to our sponsors, especially cPanel, for supporting
this project. Our next scheduled release is on 16 Sep 2014.

Enjoy!
--
Reini Urban
http://parrot.org/ http://cpanel.net/ http://www.perl-compiler.org/

Perl 6 Announce | perl.perl6.announce | 2014-08-19 16:13:32

It's wrap-up time! This will be my final post in the GSoC 2014 program, because the GSoC 2014 program ends at 19:00 UTC today - early enough that I won't be writing another post today. It's hard enough to write a post every week as it is. So what I'd like to do today is wrap up what we've achieved and what I've learned in the past three months.

Let's start with a summary of what has been achieved. MoarVM has an optional JIT compiler based on spesh, which is the optimization framework I've mentioned often by now. The JIT compiled code cannot handle all instructions yet, but it can handle loops, invocations, perl6 container ops, some regular expressions, expressions and deoptimisation. Reported benefits are anywhere between nothing and 50% faster code, which is awesome. I assume very few people have a lot of truly computation-intensive code running on rakudo perl6 yet, but I suspect that is where it would help most.

The code generator that we currently have is not awesome; it is in fact pretty naive. Values that are computed in a CPU register must be stored in the MoarVM register space before we can use them in another instruction. We make no attempt to detect loops and allocate registers accordingly, we don't do common subexpression elimination, nor do we make any attempt to dynamically select different instructions.

Well, one aspect of it is awesome, and that is that by using DynASM, there is a direct correspondence between the code we write and the code that is executed. I cannot over-emphasize how user-friendly this really is. Writing a compiler as if I'm writing regular x64 assembly is a great benefit and allows for much faster development times.

Anyway, this discussion of the code generator brings me to another topic I'd like to discuss, and that's why we wrote our own code generator rather than borrowing the one from LuaJIT (or LLVM, or v8 for that matter). I think this is a very fair question and so it deserves some discussion. The first point I'd like to call out is the idea that LLVM (or any of the alternatives) is a magic black box into which you enter intermediate-level code, wait, and out rolls totally optimal machine code that will run faster than C. I'm obviously totally biased, but based on my limited experience I'd say that's not how this works.

First of all, code generation is hardly the most difficult part of doing a JIT compiler. By far most of the complexity in this project has come from integration with the existing MoarVM infrastructure, and that's where the most complex bugs have come from, too. (I've written at length about such bugs already). For example, MoarVM call frame mechanics involve a lot of bookkeeping that simply doesn't exist for C call frames. Likewise, the JIT compiler must make objects available and understandable for the garbage collector just as the interpreter does (or change the GC to understand how to read a JITted frame, not an easy task by itself). Exception throwing and handling form a similar challenge, one which I've talked about at length in another post.

Second, using another project's code generator comes with specific costs of its own. A great example of this is the FTL JIT that has recently been added to JavascriptCore (the javascript engine that is embedded in webkit). As described here, the JavascriptCore team essentially use LLVM as a plug-in code generation backend after converting their intermediate representation into LLVM IR. And to be fair to them, this seems to cause quite a speedup. However, this also comes at a cost. For one thing, LLVM optimization and compilation is sufficiently slow that it was moved into a separate thread (so that it doesn't slow down execution of the unoptimised code). For another example, the garbage collector used by JavascriptCore is semi-conservative (i.e. the stack is scanned conservatively, whereas heap objects are scanned precisely), and to avoid the retention of lots of dead objects, unused portions of the stack have to be zeroed explicitly. For a third example, they apparently had to go to great lengths to deal with on-stack replacement, something we handle more simply. In short, using LLVM is costly.


As for using LuaJIT or v8, I'd argue that there is no truly language-independent compiler. Both v8 and LuaJIT heavily use properties of their respective languages to optimise code, properties which do not hold for perl6. And similarly, these languages may have quirks or unusual properties which perl6 does not. An example is the truth-value of NaN: for perl6 it is true, because NaN is not equal to 0.0, and for javascript it is false.
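To illustrate the perl6 side of that in code (the javascript contrast is given as a comment):

my $nan = NaN;
say $nan == 0e0;   # False - NaN is not equal to 0.0 ...
say ?$nan;         # True  - ... so boolification yields True
# in javascript, by contrast, Boolean(NaN) is false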

I would also note that impressive as the results of the aforementioned projects may be, what they do isn't magic. Algorithms to do code generation are really quite well-known and researched, and there is no reason why we couldn't - in time - implement them. More important than code generation however seems to be optimizing memory access, as it is my suspicion that this is where most of the time is actually spent (Timotimo is actually looking into this as I'm writing this). Because none of these backends know anything about perl6, none of them can use any of the special properties of perl6 to optimize code, which we can (and do).

In conclusion, the choice of a code generation backend is a trade-off like any other. There is no magical silver bullet that will make your code run fast. You could argue against writing our own (even using the help of DynASM) and such an argument could be fair. I would argue as I have above that the trade-off was such that using DynASM was right, especially given the ease of use that it provides.

So what are the next steps for the MoarVM JIT? The first thing is to keep adding instructions while trying to remain bug-free. This isn't as easy as it looks, because it seems like every third instruction we add enables the execution of poorly-tested branches that contain new bugs :-). After that, I'd still like to add full register selection for x64 by modifying DynASM. This won't be a very simple task for me since I know very little of x64 instruction encoding or of Lua. With that, we can add proper instruction selection and low-level optimization, which should result in a speed-up. However, the most important work is probably to be done in spesh by transforming polymorphic code into simpler, monomorphic, efficient code. For instance, the instruction to test the truth-value of an iterator could be transformed into an iterator-specific instruction that would be much simpler, possibly eliminating a function call. Such transformations should also benefit users who do not use the JIT compiler yet. It seems to me spesh is the place where MoarVM can gain most.

Bart Wiegmans | brrt to the future | 2014-08-19 00:27:43

Last week I got distracted by an ear infection and productivity plummeted. When I recovered near the end of the week, I didn’t want to write the blog post for Monday and ignore everything that happened since, nor did I want to write a blog post on Friday and have almost nothing to write about the next Monday!

So now I’ll give you a blog post covering two weeks instead of just one. Here we go!

  • Mouq has started implementing ** (called the HyperWhatever), which is like the Whatever Star, except that it stands for whole lists of values rather than individual values (the regular Whatever Star works like this: @names.map: *.uc)
  • Mouq, with input from TimToady, has also started working on making Lists of Lists work to spec, which is an especially nice thing to have in array slices. It means you can have a two-dimensional array and get the second entry of each inner array like @twodim[*;1]. (A small snippet illustrating both items follows after this list.)
  • Last Monday was the “soft pencils down” date for the GSoC. On that day, the moar-jit branch got merged into master, as it’s rather stable now; Configure.pl will still build a JIT-less MoarVM by default (use --enable-jit to get a JIT-enabled build instead).
  • A whole bunch more ops have been implemented for jitting, mostly by brrt and jnthn, but I added a few myself, too. I don’t recall when exactly it happened, but On-Stack-Replacement interacts properly with the JIT now, and the JIT can handle extops, too. Handlers, which used to trip up spesh, are functional with spesh and JIT as well, but I think that has already been the case for ~2 weeks.
  • The most obvious thing still missing from the jit is support for a bunch of different “param” accessor ops. brrt is still looking for the most efficient way to jit them. After those are done, I expect a tremendous number of frames that currently bail out to be jitted (either “further” or “to completion”).
  • Jnthn implemented asynchronous process spawning and communication with stdin/stdout/stderr on MoarVM, a feature he’ll be giving a talk on during YAPC::EU in Sofia at the end of this week.
  • Froggs has continued pushing Slang support in Perl 6 further and further. His port of v5 from being NQP code to being pure Perl 6 code is progressing nicely and has just reached the point where the Perl 6 version passes a third of the tests the NQP version used to pass.
  • lizmat nuked eval_dies_ok and eval_lives_ok from the specs, as they are likely to be used incorrectly (local subs and variables are not always available in the eval’d code, as the optimizer is free to turn lexicals into locals and thus hide them from eval). She’s now busy replacing all the uses with direct uses of EVAL in the spectests. A few spectests have already turned out to have been relying on some eval’d code dying, but the code was dying for the wrong reason, thus giving a false positive test result.
  • Jnthn changed both the annotation API and the children node API of the QAST objects, causing fewer allocations (no children array for leaf nodes, no annotation hash for annotation-less nodes).
  • Also, Jnthn threw out a “middle man” datastructure that was created and shortly thereafter thrown away and turned into a different datastructure on every single successful match or sub-match. An equivalent change is still needed in rakudo, but having the change in NQP already makes build times better.
  • On top of that, more kinds of things are now lazily deserialized in MoarVM, making start-up faster and cheaper on RAM yet again.
  • sergot posted a nice nearly-end-of-GSoC post on his HTTP::UserAgent and friends project, which includes a bunch of documentation (to be found in the actual HTTP::UserAgent repository itself).
  • I re-ordered code in the library loader that used to cause a whole bunch of stat calls on locations in the libpath for libraries that were already cached! Now the cache is looked at before the filesystem is searched through.
  • japhb has been improving perl6-bench for better windows compatibility and made the history comparison less prone to exaggerated scores from some of the microbenchmarks – the JVM implementation makes really, really short work of empty loops!
  • hoelzro is continuously working on improving the ability to attach documentation to objects like classes, roles, attributes, … in the S26-WHY branch.
  • pmurias is working on a refactor/rewrite of the nqp-js code emitter which leads to better code output, support for “source maps” and other nice things.
  • nwc10 has been a tremendous help for regularly running the newest MoarVM and JIT changes through ASAN (address sanitization) and valgrind for us.
  • lichtkind has resurfaced and worked on the tablets again.
  • TimToady gave pushing one list into another a big speed boost for bigger lists by un-lazying the list that is to be pushed.
  • Another change TimToady did was to give dynamic variables a little cache so that lookups would happen quite a bit quicker and be cheaper in general.
  • donaldh found out what made one of the later stages in Rakudo-JVM’s (and nqp-jvm’s) compilation processes so terribly slow and fixed it. In rakudo’s core setting compilation, stage jast used to take as long as stage parse; now it’s down to about a tenth of that. This impacts every single run of rakudo and nqp, not just the initial build.
  • psch improved some error messages in regexes. Also, some improvements to m:g// and s:g/// are in the pipeline for a later date.
  • PerlJam fixed some problems with floating point numbers and scientific notation in sprintf.
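The small snippet promised above, illustrating the first two items. The HyperWhatever was still being implemented at the time, so take the comment about ** as the intended semantics rather than a given:

my @names = <alice bob>;
say @names.map: *.uc;       # Whatever Star, one value at a time: (ALICE BOB)
# ** (HyperWhatever) stands for a whole list of values at once,
# where * only ever stands for a single one

my @twodim = [1, 2, 3], [4, 5, 6];
say @twodim[*; 1];          # semicolon slice: second entry of each inner array, (2 5)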

So that’s pretty much it. A whole bunch of nice changes, I must say.

A release is coming up this week, on Thursday if I remember correctly. Unfortunately last month’s Rakudo Star release isn’t finished yet; we’ll see if it will be released later or if the work that has gone into preparing last month’s Star release will just be ported over to the upcoming version directly.


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-08-18 13:10:06

Hi!

At the beginning I want to thank my mentor, FROGGS; he has done a lot of work during this project, and helped and taught me a lot.

Thanks to moritz as well; he did really great work as the support mentor.

There are only three days left, so this post shows the end status of my GSoC project.

The Status

TLS support (OpenSSL)

Yay, Perl 6 supports TLS now, awesome! It was tested on Linux x86-64 machines and works well. It uses OpenSSL, so you must have it installed.

This module provides low-level bindings to libssl's functions, but it has a higher-level class as well, which we can use to handle SSL connections in a less complicated way.

IO::Socket::SSL

I have written a high-level module to make the OpenSSL bindings easy to use; it is called IO::Socket::SSL and works just like p6's built-in IO::Socket::INET module.

HTTP::UserAgent

All the HTTP::* modules provided by the HTTP::UserAgent repo work, and work well. If you want to know more about them, just take a look at the documentation in every .pm6 file.

Other modules

To create all this SSL stuff it was necessary to write some other modules that were not planned at the beginning.

Here they are.

DateTime::Parse

As the name suggests, it is a DateTime parser. So far it parses the following datetime formats:

  • rfc1123
  • rfc850
  • asctime

It was needed to parse datetimes from the header of HTTP messages. Another usage is presented in HTTP::Cookies, where we use it to remove expired cookies.
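For example, parsing the RFC 1123 dates found in HTTP headers can look like this (a minimal sketch; check the module's own documentation for the exact current API):

use DateTime::Parse;

my $date = DateTime::Parse.new('Sun, 06 Nov 1994 08:49:37 GMT');
say $date;    # a DateTime for that instant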

Encode

Decoding only, so far. This project uses it to decode the content of an HTTP message.

The End

Hmm, it is not the end yet! Beside the fact that there are three days left, it is not the end because I will continue my work afterwards. :)

In the near future, I want to test it on other boxes (you can help me here, that would be great!) and make it work there (if it doesn't already). The next thing I want to do is to speed it up, optimize, do some cleaning, etc.

This project was very interesting, it was a lot of fun, and I learned a lot.

Thank you for reading my GSoC posts and I encourage you to visit my blog more often. I want to keep posting. :)

Filip Sergot | filip.sergot.pl | 2014-08-13 00:00:00

I compensate for the infrequency of my blog posts with their length. Or the other way around. Anyway, I have some good news to report, so let's do that first. The JIT branch of MoarVM (moar-jit), which has been my work for the last few months, has been merged into the main branch just this morning, after we found it not to crash and burn on building nqp and rakudo and running spectests. This means that, with some fiddling, you too can now run the JIT for MoarVM, NQP, and Perl 6. World domination for camelia!

Well, unless we find new bugs, of course. We've had plenty of those in the last few weeks, and most of them had the same cause, which can be summarized simply: semantic use of the interpreter's program counter is not really compatible with JIT compilation. But maybe that requires some elaboration.

Simply put, while interpreting, MoarVM sometimes needs to know exactly where in the program the interpreter is. This happens, for example, when using handlers, which is the mechanism MoarVM uses for try-catch constructs. In the following frame, for example, lines 1 and 4 would be the start and end of the handler, and the block within CATCH would be invoked if do-something-dangerous() actually threw something. On the other hand, if another-dangerous-thing() were to throw something, the CATCH block should clearly not catch it.


0: sub a-handler-frame() {
1:     try {
2:         do-something-dangerous();
3:         CATCH { say($_); }
4:     }
5:     another-dangerous-thing();
6: }

To determine what should be done in the event that either of these dangerous functions raises an error, MoarVM inspects the frame's current program counter and determines whether or not the try block applies. And this works very well in practice, so long as MoarVM is actually interpreting code. When the same code is compiled - as in, JIT compiled - the interpreter is merely used for entering into the JIT code, and its instruction pointer never changes as we move through the frame. So the interpreter instruction pointer can no longer be used to tell where we are. As a result, exception handling didn't quite go smoothly.

A similar problem existed with dynamic variable caches. These are used to make lexical lookup of dynamic variables cheaper by caching them locally in the frame. It is not normally necessary to know where the interpreter is in the frame, except when we're dealing with inlined frames. Put shortly: before inlining, the inlined frames may have had different ideas of the contents of the dynamic lexical cache. Because of this, MoarVM needs to know in which of the inlined frames we're actually working to figure out which cache to use. (NB: I'm not totally sure this explanation is 100% correct. Please correct me if not.) So again, when running the JIT, MoarVM couldn't figure this out, and would use the wrong cache. By the way, I should note that jnthn has been extremely helpful in finding the cause of this and several other bugs.

Because the third time is a charm (as I think the saying goes), another, very similar version of the same bug appeared just a bit earlier with deoptimization. As I had never implemented any operation that caused a 'global deoptimization', I naively thought I wouldn't have to deal with it yet. After all, global deoptimization means that all frames in the call stack have to be deoptimized. And you may have guessed it: to do that correctly, you have to know precisely where you are in the deoptimizing frame. This one was not only found but also very helpfully fixed by jnthn.

All this implied that it became necessary for me to just solve this problem - where are we in the JIT code - once and for all. And in fact, there already existed parts of a solution to it. After all, the JIT already used a special label to store the place we should return to after we'd invoked another frame. So to determine where we are in the program, all we need to do is map those pointers back to the original structures that refer to them - that is to say, the inlined frames, the handlers, and the deoptimization structures. So it was done, just this week. I'd be lying if I said that this went without a hitch, because exception handling especially presented some challenges, but I think this morning I've ironed out the last issue. And because today is - according to the GSoC 2014 timeline - the 'soft pencils-down date', in other words the deadline, we felt it was time to merge moar-jit into master and let you enjoy my work.

And people have! This gist shows the relative speedup caused by spesh and JIT compilation in an admittedly overly simple example. As a counterexample, the compilation of CORE.setting - the most time-intensive part of building rakudo - seems to take slightly longer with the JIT than without. Still, tight and simple loops such as these do seem to occur, so I hope that in real-world programs the MoarVM JIT will give better performance. Quite possibly not as good as the JVM or other systems, certainly not as good as it could be, but better than it used to be.

There is still quite a lot to be done, of course. Many instructions are not readily compiled by the JIT. Fortunately, many of these can be compiled into function calls, because this is exactly what they are for the interpreter, too. Many people, including timotimo and jnthn, have already added instructions this way. Some instructions may have to be refactored a bit and I'm sure we'll encounter new bugs, but I do hope that my work can be a starting point.

Bart Wiegmans | brrt to the future | 2014-08-11 15:16:33

Even though last week’s headline claimed it was about weeks 30 and 31, the 31st week was actually this last one! D’oh! Calendars are hard :)

Anyway, here’s your mostly-weekly fix of changes:

  • Jnthn found a bunch of optimization opportunities in the optimizer (hehe), making it run a bunch faster.
  • Another big win was jnthn finding stuff we did repeatedly or stuff we did just to throw away the results again:
    • When trying to match an <?after …>, the regex compiler would flip the target string every time the <?after > was hit. Now the flipped target string gets cached.
    • Every time an NFA got evaluated (which happens whenever we reach a “|” alternation in a regex or a proto token that has multiple implementations, i.e. very often), we took a closure. Jnthn re-wrote parts of the code that work with the NFA cache and managed to shave a dozen GC runs off our CORE.setting build!
    • Another 800000 allocations went away after jnthn let the alternation index array be generated statically rather than every time the alternation is hit.
    • Improved handling of sink (which is what handles things like failure objects being thrown and side-effects invoked in certain conditions; only thing you need to know is it gets emitted a whole lot during AST generation) leads to smaller ASTs and gives our code-gen opportunities to output better code.
    • even more improvements I didn’t mention!
  • A little piece of work jnthn suggested I do was letting our bytecode specializer mark a guard (like “please make sure this object is concrete” or “please make sure the type of this object is $FOO”; guards cause a deoptimization if they fail) as necessary only when an optimization actually ended up relying on it. Unfortunately, we don’t have before/after measurements for how often specialized bytecode deoptimized …
  • On MoarVM, simple exception throws can now be turned into simple goto operations. This also includes things like redo/next/last in loops.
  • I implemented specializations for the smart numify and smart stringify operations on MoarVM; something that happens especially often is numifying lists or hashes, which now turns into a simple elems op (which our specializer can and will further simplify). A short snippet showing what those ops correspond to follows after this list.
  • I also implemented a few very simple ops for the jit, so that brrt could spend more time doing the hard bits.
  • Another thing jnthn worked on is making allocations and deserializations lazy. This helps improve the start-up time of the repl and all programs, and also improves the memory usage of perl6 programs that don’t use much of the CORE setting, which is probably most of them.
  • Vendethiel (Nami-Doc on github) wrote a nice guide to perl6 that became included on learnxinyminutes. Kudos!
  • Froggs and lizmat worked further on CompUnitRepo and module-loading-related things
  • a well-timed “well volunteered!” motivated btyler to start a C binding for the jansson JSON parsing/manipulating/outputting library. In very performance-critical situations, especially when you only use parts of a very big JSON document, this should give you better performance than JSON::Tiny. However, JSON::Tiny is a part of the benchmark suite we’re running all the time, meaning we’ll react to performance regressions swiftly and try to figure out what makes it slower than it has to be.
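The smart numify/stringify snippet promised above; at the Perl 6 level, these ops are what make lines like the following work:

my @a = 1, 2, 3;
say +@a;    # 3 - numifying an array boils down to elems
say ~@a;    # "1 2 3" - smart stringify
my %h = a => 1, b => 2;
say +%h;    # 2 - numifying a hash is elems, too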

Now here’s some numbers to compare today’s state with the state at the time of the last release:

  • 30 seconds for a NQP build (used to be 37)
  • 57 seconds for a Rakudo build (used to be 1:15)
  • 0.02s and 13 MB maxrss to fire up a Rakudo REPL (used to be 0.04s and 35 MB)
  • 0.2s and 114 MB maxrss for a “say 1” (used to be 0.27s and 135 MB)
  • now it is: 584.95user 75.88system 3:44.71elapsed 294%CPU for a full spectest run with 4 test_jobs
  • used to be: 765.43user 89.06system 4:40.32elapsed

There are still more opportunities to deserialize things lazily upon first use in MoarVM, which ought to give us lower baseline memory usage and spread out startup time “more evenly” (and remove it when possible).

The baseline memory usage of our Perl 6 compilers has always been something that annoyed me. MoarVM gave us about a 2x memory saving compared to Parrot, and now we’ve started work on making the memory usage better. I’m excited! :)

Sadly, brrt has been plagued with very stealthy bugs all week, so progress was kind of slow. What he did manage to finish is support for indexat and jumplist, which are needed in most regexes. Also, sp_findmethod, findmeth, findmeth_s, indexat, getdynlex, … are done now, but temporarily commented out for bug-hunting purposes.

That’s it from me for this week. I’m already looking forward for the cool stuff I’ll be able to highlight next week :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-08-05 00:12:06

Hey there,

last week I didn’t find enough exciting stuff to talk about and there was a release due for the middle of the week, so I decided to just skip a week. Here’s what happened in both weeks combined:

  • There was a Rakudo release and avuserow is currently quite close to finishing up a Rakudo Star release as well. Since Rakudo Star was supposed to be a quarterly release anyway, we’re still on track :P
  • on the JIT side of things, thanks to brrt MoarVM got the following stuff improved:
    • ops that want to invoke can now be handled, including the smart_stringify and smart_numify ops, which were very common in our test code (we’ve been using the CORE.setting to get “real-world” code.)
    • the jit learned how to cope with On-Stack-Replaced frames.
  • thanks to a test case by TimToady, jnthn was able to fix a couple of memory leaks in MoarVM’s spesh framework (found with valgrind).
  • jnthn also came up with a bunch of optimizations that benefit regex/grammar matching, start-up time, and junctions with certain operators.
  • Inlining, which is a major source of speed improvements on MoarVM, has been bailing out if it saw the lexotic op in a frame (which is used for example for returning, but also for exceptions) or when a frame had handlers (exceptions handlers for example). Those occur very often in code generated by Rakudo, so Perl 6 code didn’t benefit from inlining as much. Both of these have been made to cooperate with inlining now, though.
  • extops, meaning ops that are loaded into the VM from a dynamic library rather than being built into it, can now participate in the facts discovery and bytecode specialization phases of spesh. This is especially nice for Perl 6’s containers, as the decont operation can now be as efficient as a simple pointer indirection.
  • Froggs has reached the point where a v5 implemented in Perl 6 code (rather than NQP code as it used to be) runs again.
  • Froggs also made warnings print line numbers and file names again on JVM and MoarVM.
  • Froggs implemented the “charrange” method for our regex compiler, which I helpfully stubbed in about 9 months ago …
  • Coke spent a lot of time making sure non-passing tests were properly fudged and had tickets in our RT.
  • dwarring has made a bunch more tests corresponding to advent calendar posts and created at least one RT ticket for those, leading to (actually: following) a discussion about how junctions should interact with smart matching.
  • pmurias has started working on NQP-Js again, our javascript backend for NQP (and rakudo in the future). There’s a todo file to which low hanging fruit will be added for people who are interested in getting into this project.
  • Mouq has been working on support for semilists (“semicolon-separated lists”) in hash and list subscripts, which are going to help a lot with multi-dimensional arrays and such.
  • hoelzro improved our test running of rakudo a bunch, among other things to make output more useful
  • japhb built even more improvements to perl6-bench!
  • Froggs finally made it possible to use variables and code-blocks as quantifiers in regexes, i.e. / foo ** $a /. This is probably a feature many people have missed so far! It works on all our backends, as well! (A tiny example follows after this list.)
  • Froggs also improved a bunch of other things, like how words are split inside our quotewords (which we spell < >).
  • Instead of writing this post, I spent some time implementing more and more ops for moar-jit, following a very helpful bail report by moritz.
  • Another thing I implemented (after jnthn assured me it would be simple) is less conservative marking of which guards we end up relying on when specializing code; 15 guards removed for the core setting may not be terribly much, but each of these may correspond to any amount of unnecessary bail-outs from specialized bytecode back to unoptimized bytecode.

Right now, some more bugs in the jit and spesh are being ironed out. After that I’ll get to find out what happened to the roughly 500 frames that used to run into some unimplemented operation that I implemented in the meantime; each of those can either run straight into a different unimplemented op, or get compiled successfully. Either case is interesting, and they are easy to distinguish by just looking at the overall numbers of bails.

Anyway, it’s nearing midnight and I still want to get this post out the door on Monday local time. So here you go, an unpolished blog post :)

Hope your week’s going to be good; I’m sure it’ll be a nice week for us Sixers :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-07-28 21:59:00

Once upon a time, way too long ago, I blogged and notified you of my progress. There's been plenty of progress since then, so it was time to write again. Since I last wrote, I've added support for invocations, 'invokish' instructions - including decontainerization and object conditionals, which appear really frequently - and OSR. I've also had to fix bugs which crept into the code but which were never properly tested before, due to the fact that we typically need to implement a whole set of ops before any particular frame is compiled, and then if those frames break it is unclear which change caused it. I'll talk about these bugs a bit first and then about my next challenges.

The first bug that I fixed seemed to have something to do with smart numification specifically. This is an example of a so-called 'invokish' instruction, in which an object is coerced into a primitive such as a number or a string. Some types of objects override the default methods of coercion and as such will need to run code in a method. Because the JIT doesn't know beforehand if this is so - many objects are coerced without invoking code at all - a special guard was placed to ensure that the JIT code falls out into the interpreter to deal with an 'unexpected' method invocation. Freakishly, this seemed to work about half of the time, depending on the placement of variables in the frame.

Naturally, I suspected the guard to be wrong. But (very) close inspection in gdb assured me that this was not the case; the guard in fact worked exactly as intended. What is more, JIT bugs usually cause unapologetic crashes, segmentation faults, bus errors and the like, but this bug didn't. The code ran perfectly, just printing the wrong number consistently. Ultimately I tracked it down to the differences in parameter passing between POSIX and Windows. On both platforms, the first few parameters to a function are passed in registers. These registers differ between platforms, but that's easy enough to deal with using macro definitions. On both platforms, floating-point arguments are passed via the 'SSE' registers as opposed to the general-purpose registers. However, the relative positions are assigned differently. On windows, they are assigned in the order of the declaration. In other words, the following function declaration


void foo(int i, double d, int j, double f);


assigns i to the first general-purpose register (GPR), d to the second SSE register, j to the third GPR, and f to the fourth SSE register. On POSIX platforms (Mac OS X, Linux, and the rest), they are first classified by type - integer, memory, or floating point - and then assigned to consecutive registers of the appropriate class. In other words, i and j are passed in the first and second GPR, and d and f are passed in the first and second SSE register. Now my code implemented the windows behavior for both platforms, so on POSIX, functions expecting their first floating-point argument in the first SSE register would typically find nothing there. However, because the same register is often used for computations, there typically would be a valid value, and often the value we wanted to print. Not so in smart numification, so these functions failed visibly.

The second bug had (ultimately) to do with write barriers. I had written the object accessors a long time ago and had tested the code frequently since then, so I had not expected anything to be wrong with them. However, because I had implemented only very few string instructions, I had never noticed that string slots require write barriers just as object slots do. (I should have known; this was clear from the code in the interpreter.) Adding new string instructions thus uncovered an unused code path. After comparing the frames that were compiled with the new string instructions with those without, and testing the new string instructions in isolation, I figured that the accessors had something to do with it. And as it turned out, they had.

The third bug, which puzzled me for over a week but really shouldn't have, involved the other type of object accessors - REPR accessors. These accessors are hidden behind functions; however, these functions did not take into account the proper behavior on type objects. Long story short, type objects (classes and the like) don't have any attributes to look up, so they should return NULL when asked for any. Not returning NULL will cause a subsequent check for nullity to pass when it shouldn't. Funnily enough, this one didn't actually cause a crash, just a (MoarVM) exception.

I suppose that's enough about bugs and new features though, so let's talk about the next steps. One thing that would help rakudo perl 6 performance - and what jnthn has been bugging me about for the last weeks :-) - is implementing 'extops'. In short, extops are a way to dynamically load new instructions into the interpreter. For the interpreter, they are just function calls, but for the JIT they pose special challenges. For example, within the interpreter an extop can just branch to another location in the bytecode, because this is ultimately just a pointer update. But such a jump would be lost to the JIT code, which after all doesn't know about the updated pointer. Of course, extops may also invoke a routine, and do all sorts of interesting stuff. So for the JIT, the challenge will not be so much executing the extops as figuring out what to do afterwards. My hope is that the information provided about the meaning of the operands - that is, whether they are literal integers, registers, or coderefs - will provide sufficient information to compile correct code, probably using guards. A similar approach is probably necessary for instructions that (may) throw or catch exceptions.

What's more directly relevant is that moar-jit tends to fail - crash and burn - on windows platforms. Now as I've already mentioned, there are only a few differences between windows and POSIX on the assembly level. These differences are register usage and calling conventions. For the most part, I've been able to abstract these away, and life was good (except for the floating-point functions, but I've already explained that at length). However, there are only so many arguments that can fit in registers, and the rest of them typically go on the stack. I tacitly assumed that all arguments that are pushed on the stack should be 64 bits wide (i.e. as wide as a register). But that's not true; smaller arguments take only as many bytes as they need. The ubiquitous MVMint32 type - an integer 32 bits wide - takes only 4 bytes. Which means that a function expecting two 32-bit numbers on the stack would receive the value of only one, and simply miss the other. As POSIX has 6 GPRs available, and Win64 only 4, it is clear this problem only occurs on windows, because no function takes enough arguments to spill onto the stack on POSIX.

Seems like a decent explanation, doesn't it? Unfortunately it is also wrong, because the size of the argument only counts on POSIX platforms. On windows, stack arguments are indeed all 64 bits wide, presumably for alignment (and performance) reasons. So what is the problem, then? I haven't implemented the solution yet, so I'm not 100% sure that what I'm about to write is true, but I figure the problem is that after pushing a bunch of stack arguments, we never pop them. In other words, every time we call a function that takes more than 4 parameters, the stack top grows a few bytes, and never shrinks. Even that wouldn't be a problem by itself - we'd still need to take care of alignment issues, but that's no big deal.

However, the JIT happens to use so-called non-volatile or callee-save registers extensively. As their name implies, the callee function is responsible for restoring these registers to their 'original' values upon exit, either by saving those values on the stack or by not using the registers at all. Contrary to popular opinion, this mechanism works quite well; moreover, many C compilers preferentially do not use these registers, so using them is quite cheap in comparison to stack usage. And simple as well. But I store and restore them using push and pop operations, respectively. It is vital that the stack top pointer (the rsp register) is in the right place, otherwise the wrong values end up in these registers. But when the stack keeps on growing, on windows as well as on POSIX systems, the stack pointer ends up in the wrong place, and I overwrite the callee-save registers with gibberish. Thus, explosions.

From my review of the available literature - and unfortunately, there is less literature available than one might think - and the behavior of C compilers, it seems the proper solution is to allocate sufficient stack space on JIT code entry, and store both the callee-save registers and the stack parameters within that space. That way, there's no need to worry about stack alignment issues, and it's always clear just where the values of the callee-save registers are. But as may be clear from this discussion, that will be quite a bit of work, and complex too. Testing might also be challenging, as I myself work on linux. But that's ultimately what VMs are for :-). Well, I hope to write again soon with some good news.

Bart Wiegmans | brrt to the future | 2014-07-25 08:14:55

As a dreamer of dreams and a travelin' man,
I have chalked up many a mile.
Read dozens of books about heroes and crooks,
And I've learned much from both of their styles.
-- Heard playing in Margaritaville bar,
in Orlando after YAPC::NA::2014.

On behalf of the Parrot team, I'm proud to announce Parrot 6.6.0, also known
as "Parrothead". Parrot (http://parrot.org/) is a virtual machine aimed
at running all dynamic languages.

Parrot 6.6.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/supported/6.6.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.6.0 News:
- Core
+ Optimized method call overhead at compile-time in pmc2c directly
to avoid run-time overhead: fewer temporary PMCs, fewer branches and
at least 2 costly C functions avoided per method call.
+ New arity warning:
"wrong number of arguments: %d passed, %d expected" [GH #1080]
- Build
+ Workaround libffi-3.1 upstream bug in ffi.h [GH #1081]
+ Expand pkg-config make macros in auto::libffi [GH #1082]
- Tests
+ Fix t/pmc/filehandle.t on cygwin with newer Cwd >= 3.40 [GH #1077]
- Community
+ Our GSoC student passed the project midterm, having made great progress.
Congratulations to Chirag Agrawal!
+ More parrot-bench numbers at https://github.com/parrot/parrot-bench,
now using gnuplot for all releases from 1.8.0 - 6.6.0, amd64 + -m32


The SHA256 message digests for the downloadable tarballs are:
6d21d3b733d980ab7cb8ee699c59e2b782d8a9c8c0e2cb06d929767e61024ace parrot-6.6.0.tar.gz
08e9e02db952828f6ab71755be47f99ebc90894378f04d8e4d7f3bc623f79ff5 parrot-6.6.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is at 19 Aug 2014.

Enjoy!

Perl 6 Announce | perl.perl6.announce | 2014-07-16 08:55:39

Even though the previous post was a bit late and so this one covers a few days less, there’s interesting things to report. This week, raiph sent me a little summary of IRC activity again, which made it much easier for me. So thanks!

And now let’s see about those developments:

  • hoelzro has continued work on the pod documentation attaching to entities (methods, classes, variables, …)
  • Will Coleda worked on cleaning up our spectest suite a bunch.
  • brrt has continued work on the JIT, namely making invocations work (both the spesh-improved “fast” invocation and the regular “slow” invocation).
  • There is now also a log that shows what opcodes end up being jitted and which opcodes cause the JIT to bail out due to NYI or other reasons.
  • Using this log, brrt concentrated on a bunch of opcodes that commonly cause bails.
  • Even I got some work in this week, namely having been inspired to hack on the JIT a bit myself; a bunch of ops can be compiled to regular C function calls, and those are sufficiently easy to add. I’ve added support for:
    • checkarity (which is responsible for giving run-time errors for wrong numbers of arguments if we couldn’t determine that at compile time)
    • push_o, pop_o, shift_o and unshift_o (to access lists and such)
    • atpos_o (getting an object in a list by index)
    • getattr_s (accessing a named attribute of an object; with help from brrt).
  • Additionally, jnthn pointed out that deconts on containers that are quite simple (just a pointer indirection) can be spesh’d to look exactly like a spesh’d attribute access, so I was able to add support to spesh to simplify some deconts (which is an operation that used to cause the jit to bail out extremely often).
  • The top ops causing the JIT to bail during the compilation of CORE.setting are now:
    • “graphs_s” (which seems to be in every piece of code that follows “getattr_s” + “flattenropes”)
    • “ifnonnull” (100% of the “atpos_o” bails turned into “ifnonnull” bails)
    • param_rp_o (grab a required positional parameter)
    • newlexotic (related to exception handling)
    • the decont ops that were not spesh’d away.
  • Sadly, MoarVM’s JIT compiler isn’t invoked at all in the case of On-Stack-Replacement optimized code, so none of our current benchmarks will show any change between JIT and no-JIT.
  • jnthn has started on the long-awaited rewrite of MoarVM’s string handling. Here’s a benchmark from jnthn’s machine comparing last month’s release of MoarVM, the strref branch of MoarVM and last year’s rakudo-parrot. (so no JIT yet). Across the board there’s improvements, but the most important improvement can be seen in the benchmarks that have “concat” in their names. These are the ones that concatenate strings.
  • jnthn greatly improved the metaoperator parsing in rakudo. The parser used to barf when it saw rather unwieldy operators with disambiguating brackets in them and set operations and such, for example (|)= or metaops with reductions like [[*]]=.
  • jnthn also merged the “lex2loc” branch that allows the optimizer to turn lexical variables into locals if we can prove that they’re not accessed outside the frame they’re in. All backends benefit from this. (A small sketch of the idea follows after this list.)
  • psch finished the implementation of the translate operator “tr///” and its return value.
  • thanks to japhb, more perl6-bench improvements landed.
  • A whole bunch of work has been put into improving the POD to HTML rendering by Mouq, lue and atroxaper, like the Pod::SAX, Pod::Walker, and an HTML renderer based on both of these modules.
  • retupmoca added a module for interfacing with RabbitMQ to the ecosystem, called Net::AMQP.
  • lizmat worked some more on S22 and the related tests and did some more discussion about details with the community.
  • lizmat and retupmoca fixed problems for Supplies that are .tap’d multiple times.
  • masak added a module Data::Pretty that will give common things you might want to “say” a friendlier output.
  • sergot posted about adding both a high level and a low level wrapper for OpenSSL on his blog.
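To sketch the lex2loc idea mentioned above in Perl 6 terms (the analysis itself runs on the compiler's internal QAST, so the subs here are purely illustrative):

sub f() {
    my $x = 42;          # $x never escapes f, so the optimizer
    say $x * 2;          # may turn this lexical into a cheap local
}
sub g() {
    my $y = 0;
    return { $y += 1 };  # $y is closed over, so it must stay a lexical
}
f();        # 84
say g()();  # 1 - the closure still sees $y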

And if you’re interested in getting into Perl 6 Module Development, you could adopt bbkr’s MongoDB related modules BSON and MongoDB.

The next releases are going to happen soon-ish. On Thursday, MoarVM is going to be released and the NQP + Rakudo release is going to follow this week, too.

Thank you for reading and may your week be filled with adorable kitties (or something equivalently cuddly, if you’re allergic).


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-07-14 17:05:16

Hi!

I can proudly announce that Perl 6 has OpenSSL bindings available now! \o/

We have low-level bindings and a high-level wrapper; you can simply write:

use OpenSSL;

my $ssl = OpenSSL.new;
# set up the connection here
# ...
$ssl.set-fd($fd);

$ssl.connect;

$ssl.write("GET /\r\n\r\n");
say $ssl.read(100);

$ssl.shutdown;

But if you don't want to use the high-level wrapper, you can use OpenSSL's individual modules, like:

  • OpenSSL::SSL - contains the declaration of the SSL struct and functions to handle it
  • OpenSSL::Ctx - contains the declaration of the SSL_CTX struct and functions to handle it
  • OpenSSL::Cipher, and so on

The thing which is not clear above is the "set up the connection" section. I've written IO::Socket::SSL, which provides a high-level API, the same as IO::Socket::INET, for SSL connections. It does the "set up the connection" part for us, using a client_connect function written in C which returns the connection's file descriptor; that is what $ssl.set-fd($fd) needs, because OpenSSL wants to own the connection.

int client_connect(char *hostname, int port) {
    // ...
    return handle; // fd
}

We can do the same in another way; we just have to pass a file descriptor to .set-fd($).

Using IO::Socket::SSL, we can write:

use IO::Socket::SSL;

my $sock = IO::Socket::SSL.new(:host<filip.sergot.pl>, :port(443));
$sock.send("GET / HTTP/1.1\r\nHost: filip.sergot.pl\r\n\r\n");
say 'Response: ', $sock.recv;

Simple as that. :)

Anyway... The most exciting thing is that HTTP::UserAgent can handle SSL now! Some bugs are known, but it works in some cases.

As I wrote earlier, we are able to connect to sites which use SSL like this:

say $ua.get('https://filip.sergot.pl').content;

... the above example returns a 403 error just because my site doesn't use SSL but has this port (443) open.

What's next?

I want to get rid of bugs, implement the server side of OpenSSL, implement more features and do some cleaning after all.

There is only one month left! :)

Filip Sergot | filip.sergot.pl | 2014-07-14 00:00:00

Hey there! I’ve been a bit busier and distracted than usual this Monday and Tuesday. Sadly, I can’t really write the posts on Sunday night and publish them when I get out of bed, as many nice things usually happen between Sunday evening and Monday noon :) Anyway, raiph combed through the IRC logs for me and collected a whole bunch of things. Here’s my rendition of the recent happenings, including up to today, as I’m running so late:

  • lizmat led the discussion about the behavior of writing “my %h = { … }”, which is now considered assigning an itemized hash to a hash variable and gives a deprecation warning. Programmers might assume this would flatten the right-hand side into the hash, but we found that behavior too magical. Thus, lizmat took care of that. (A tiny example follows after this list.)
  • psch has been working (for a bit longer than just this past week, actually) on patches to teach rakudo to properly work with tr///, the “translation” operator. It’s supposed to return the number of characters changed in the source string, which was the most complicated part so far, it seems.
  • FROGGS taught all backends to do pointer arithmetic on CPointer repr’d things, allowing NativeCall to handle array-like things better.
  • FROGGS and sergot also implemented “NativeCast” for the NativeCall module, allowing a CPointer to be cast to any type we know. Apart from legit use cases, this allows some scary, scary stuff to be done :)
  • The above work is an important stepping stone towards proper TLS support as well as supporting function pointers (for an OpenGL implementation, for example, when you want to have GL extensions)
    • Function Pointers need a bit more work in the perl6 Grammar (actually the Actions) so that we have proper type annotations for &vars (like my &funcptr:(int, int --> int) or something)
  • lizmat implemented and then reverted the “is cached” trait on methods, which is a bit harder than doing it for subs, because it has to factor in the “self”, as well. It’s not yet clear what the semantics are supposed to be.
  • lizmat has also continued working diligently on the S11 and S22 things, among others the CompUnitRepo classes. I saw FROGGS do a few things in this area, too.
  • masak and krunen did a little “ping-pong programming session” to implement “emmabot”, a bot that should report on the daily ecosystem and Rakudo Star module testing results. Here’s the section of the irclog and here’s the repository for the bot. I didn’t pay close attention to the conversation, but it might be a good example of how to do Behavior Driven Development :)
  • Speaking of the daily ecosystem module testing results, a result page now lives on one of our servers and can be reached here, thanks to moritz, coke and colomon.
  • hoelzro invested a bunch of time into making the handling of #| (that is, attaching pod comments to variables, methods, classes, subs, …) match the spec in rakudo and the test suite. It turned out to be quite a bit hairier than it looks from afar!
  • dwarring continued to improve our test coverage of the advent calendar posts.
  • japhb landed more improvements to perl6-bench, including extracting a bunch of smaller benchmarks from rc-forest-fire to help figure out why rakudo is so slow at it.
    • In doing so, we found out that rakudo easily beats perl5 at Big Rational Number arithmetic. Pretty neat!
  • jnthn continued his usual work: stability and performance. Among other things, rakudo now turns more lexicals into local variables if it can and variables gained a new scope (“typevar”) that is interesting for optimizing roles and such.
  • jnthn helped japhb figure out what changes are needed to get perl6-bench to run well on windows. Here’s a benchmark result from his machine.
  • jnthn has made MoarVM’s bytecode specializer throw away guards that the specialized code ends up not relying upon. This reduces the number of times we deoptimize unnecessarily.
  • A few fun things on RosettaCode: TimToady’s “draw a clock” implementation with braille-based graphics and sml’s parser for the Multiple Dwelling Problem.
  • tadzik tried out MoarVM on his phone. Sadly, cross-compiling the MoarVM bytecode isn’t as trivial as it ought to be, as it contains file paths that would need fixing up…
  • Mouq is working on a Pod::Walker module to make creating Pod-To-* converters easier. In order to test it, he’s also refactoring Pod::To::HTML to be based upon it.
  • atroxaper is working on a different Pod walker, namely Pod::SAX. It will provide a stream-like/callback-like API for processing Pod files.
  • retupmoca fixed up the ecosystem to point IRC::Utils, Date::WorkdayCalendar, TestML and YAML at forked repositories on github that had pull requests open for longer than a month.
  • zengargoyle built a module for fortune files, grondilu started work on a module for “chess related stuff”.
  • Coke diligently made sure everything’s all right with the daily test runs. For example, at one point rakudo.parrot had failed 1800 spectests (due to mostly a single problem).
  • JimmyZ updated MoarVM’s packaged uthash to the newest version.
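The tiny example promised for the first bullet (the comments paraphrase the intent; the exact warning text is rakudo's, not quoted here):

my %h = { a => 1, b => 2 };   # assigns an itemized Hash; now gives
                              # a deprecation warning
my %i = a => 1, b => 2;       # what you usually want: the pairs
                              # flatten into the hash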

Here’s an update on the GSoC stuff:

  • brrt recently blogged about his progress and has been working mostly on refactoring the code base and supporting invocation in the jit since then.
  • sergot has started working on OpenSSL related things. The repositories can be found on his github.
  • Unfortunately I have either not heard from the other projects or forgotten what I’ve heard, but I do recall that all our students have passed the mid-term evaluations!

Apologies again for the much belated post. Hope y’all have a nice week :)

Update: a few inaccuracies were fixed.


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-07-09 11:12:35

So, it seems I haven't blogged in 3 weeks - or in other words, far too long. It seems time to blog again. Obviously, timotimo++ has helpfully blogged my and others' progress in the meantime. But to recap, since I last blogged the following abilities have been added to the JIT compiler:

  • Conditionals and looping
  • Fast argument list access
  • Floating point and integer arithmetic
  • Reading and writing lexicals, and accessing 'world values'.
  • Fast reading and writing of object attributes
  • Improved logging and bytecode dumping.
  • Specialization guards and deoptimisation

The last of these points was done just this week, and the problem that prompted it and the solution it involves are relevant to what I want to discuss today, namely invocation.

The basic idea of speculative optimization - that is, what spesh does - is to assume that if all objects in the variable $foo have been of class FooBar before, they'll continue to be FooBar in the future. If that is true, it is often possible to generate optimized code, because if you know the type of an object you typically know its layout too. Sometimes this assumption doesn't hold, and then the interpreter must undo the optimization - basically, return the state of the interpreter to where it would've been if no optimization had taken place at all.
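A hedged Perl 6-level sketch of the kind of speculation involved (the class and sub names are mine, purely for illustration):

class Dog { has $.name }
class Cat { has $.name }

sub describe($animal) { say $animal.name }

describe(Dog.new(name => 'Rex'));   # if every $animal so far was a Dog,
describe(Dog.new(name => 'Fido'));  # spesh may emit Dog-specific attribute
                                    # access guarded by a cheap type check
describe(Cat.new(name => 'Tom'));   # a Cat fails the guard and forces
                                    # deoptimization back to generic code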

All the necessary calculations have already been done by the time spesh hands the code graph over to the JIT compiler, so compiling the guards ought to be simple (and it is). However, an important assumption broke because of it. The MoarVM term for a piece of executable code is a 'frame', and the JIT compiler compiles whole frames at a time. Sometimes frames can be inlined to create bigger frames, but the resulting code always represents a single new frame. So when I wrote the code responsible for entering JIT-ted code from the interpreter, I assumed that the JIT-ted code represented an entire frame, at the end of which the interpreter should return control to its caller.

During deoptimization, however, the interpreter jumps from optimized, type-specific code, to safe, unoptimized 'duck-typing' code. And so it must jump out of the JIT into the interpreter, because the JIT only deals with the optimized code. However, when doing so, the JIT 'driver' code assumed that control had reached the end of the frame and it ought to return to the caller frame. But the frame hadn't completed yet, so where the caller had expected a return value there was none.

The solution was - of course - to make the return from the current frame optional. But in true perl style, there is more than one way to do that. My current solution is to rely on the return value of the JIT code. Another solution is to return control to the caller frame - which is, after all, just a bit of pointer updating, and encapsulated in a function call, too - from the JIT code itself. Either choice is good, but both have their drawbacks. Obviously, having the driver do it means that you might return inappropriately (as in the bug), and having the JIT code do it might mean that you'd forget it when it is appropriate. (Also, it makes the JIT code bigger.) Moreover, the invoked frame might be the toplevel frame, in which case we shouldn't return to the interpreter at all - the program has completed, is finished, done. So this has to be communicated to the interpreter somehow if the JIT code is considered responsible for returning to the caller frame itself.

The issues surrounding a JIT-to-interpreter call are much the same. Because MoarVM doesn't 'nest runloops', the JIT code must actually return to the interpreter to execute the called code. Afterwards the interpreter must return control back to the JIT code. Obviously, the JIT-ted frame hasn't completed when we return to the interpreter during a callout, so it can't return to its caller for the same reason. What is more, when calling out to the interpreter, the caller (which is JIT code) must store a return address somewhere, so the JIT driver knows where to continue executing after the callee returns.

I think by now it is too late to try and spare you from the boring details, but the summary of it is this: who or what should be responsible for returning control from the JIT frame to the caller frame is ultimately an issue of API design, specifically with regards to the 'meaning' of the return value of the JIT code. If the 'driver' is responsible, the return value must indicate whether the JIT code has 'finished'. If the JIT code is responsible, the return value must indicate whether the whole program has finished, instead. I'm strongly leaning towards the first of these, as the question 'is my own frame finished' seems a more 'local' question than 'is the entire program finished'.
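
To illustrate the difference, here is a toy model of the first design - written as plain Perl 6 rather than MoarVM's actual C, with made-up names - in which the JIT code's return value answers 'has my own frame finished?':

    my $steps = 0;
    # Stand-in for JIT-compiled code: False means it merely yielded
    # control (a deopt or a call-out); True means its own frame has
    # actually run to completion. Here we pretend to yield twice.
    sub jitted-code(--> Bool) {
        ++$steps >= 3;
    }
    sub jit-driver() {
        until jitted-code() {
            say "interpreter takes over (deopt or invocation)";
        }
        say "frame finished; the driver returns to the caller frame";
    }
    jit-driver();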

With that said, what can you expect of me the coming week? With object access and specialization guards complete, the next step is indeed calling interpreted code from the JIT, which I started yesterday. I should also get to argument passing, object creation, decontainerization, 'special conditionals', and many other features of MoarVM. The goal is to find 'compilation blockers', i.e., operations which can't be compiled yet but are common, and work through them to support ever greater segments of compiled code.

In the long run, there are other interesting things I want to do. As I mentioned a few posts earlier, I'd like to evolve the 'Jit Graph' - which is a linked list, for now - into a 'real' graph, ultimately to compile better bytecode. An important part of that is determining, for any point in the code, which variables are 'live' and used, and which are not. This will allow us to generate code that loads important values - e.g., the pointer to the input arguments buffer - temporarily into a register, so that further instructions won't have to load them again. It will also allow us to avoid storing a computed value in a local if we know that it will be overwritten in the next instruction anyway (i.e., is temporary). Because copy instructions are both very frequent and potentially very costly (because they access memory), eliminating them as far as possible should result in great speed advantages. Ultimately, this will also allow us to move more logic out of the architecture-specific parts and into the generic graph-manipulating parts, which should make the architecture-dependent parts simpler. I won't promise all this will be done in a single summer, but I do hope to be able to start on it.




Bart Wiegmans | brrt to the future | 2014-07-06 05:49:21

Well, let’s see here …

  • On-Stack-Replacement was merged by jnthn into master and seems pretty stable. It lets the bytecode specializer (and later the JIT compiler) kick in even when there are no invocations involved, like in a loop with many iterations.
  • There are also improvements to inlining, especially stronger return handler elimination inside Rakudo.
  • Today, jnthn put a bunch of work into making MoarVM’s multithreading and async I/O stuff more robust. We’re regularly torture-testing a very simple asynchronous server using wrk (a tool like ab) to find race conditions and such. It’s not perfectly stable, but getting better.
  • Lizmat also made some async/threads fixes.
  • The JIT compiler for MoarVM learned how to handle lexicals (bindlex and getlex), simple attribute access for objects, and “World Values” (most commonly compile-time constants and classes/subs).
  • A talk by jnthn on garbage collectors in general and the garbage collector of MoarVM in particular has made it onto the public ‘net. You can watch it on InfoQ.
  • There’s a few new benchmarks in the perl6-bench repository. As opposed to the big chunk of microbenchmarks we’ve had so far, these are a bit bigger. They were taken from RosettaCode.
  • Japhb also started working on a “history” comparison type for perl6-bench, but it doesn’t yet have output functionality for HTML graphs.
  • Here’s a benchmark run from a couple of days ago that compares rakudo and nqp, with and without a recent fix to OSR and inlining, to the released 2014.06 rakudo/nqp as well as perl5. This doesn’t include the JIT, unfortunately, and the difference is only really noticeable in a few of the graphs (damn you, log-log scale!). I guess for next week I ought to make a somewhat more comprehensive benchmark with older versions and maybe even with the JIT.
  • FROGGS’ panda version with CPAN support can now extract .tar.gz files it pulls from the ‘net!
  • Someone added a bunch of examples to RosettaCode, but I couldn’t easily figure out which ones. But since RosettaCode is constantly growing anyway, it’s always a good time to check out what’s new!

I’m already looking forward to next week’s changes, there’s lots of stuff that still needs doing. For example, we’ve been avoiding the string (actually rope) rewrite for a long time and MoarVM’s performance for things like concatenation has suffered greatly for it.

Anyway, that’s it for now. I wish you a pleasant week :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-06-30 20:12:21

It’s been a little while since I wrote an update here, and with a spare moment post-teaching I figured it was a nice time to write one. I was lucky enough to end up being sent to the arctic (or, near enough) for this week’s work, meaning I’m writing this at 11pm, sat outside – and it’s decidedly still very light. It just doesn’t do dark at this time of year in these parts. Delightful.

Asynchronous Bits

In MoarVM and Rakudo 2014.05, basic support for asynchronous sockets landed. By now, they have also been ported to the JVM backend. In 2014.06, there were various improvements – especially with regard to cancellation and fixing a nasty race condition. Along the way, I also taught MoarVM about asynchronous timers, bringing time-based scheduling up to feature parity with the JVM backend. I then went a step further and added basic file watching support and some signals support; these two sadly didn’t yet make it to the JVM backend. On signals, I saw a cute lightning talk by lizmat using signal handling and phasers in loops to arrange Ctrl+C to exit a program once a loop had completed its current iteration.
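
I don’t have lizmat’s exact code to hand, but the idea can be sketched in a few lines (a sketch of my own; the sleep stands in for real per-iteration work):

    my $stop = False;
    signal(SIGINT).tap({ $stop = True });   # note the Ctrl+C rather than dying
    for 1..10 -> $i {
        sleep 1;                            # stand-in for one unit of real work
        say "finished iteration $i";
        last if $stop;                      # exit only once the iteration is done
    }
    say "exited cleanly";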

While things basically work, they are not yet as stable as they need to be – as those folks implementing multi-threaded asynchronous web servers and then punishing them with various load-testing tools are discovering. So I’ll be working in the coming weeks on hunting down the various bugs here. And once it’s stable, I may look into optimizations, depending on whether they’re needed.

Various Rakudo fixes and optimizations

I’ve also done various assorted optimizations and fixes in Rakudo. They’re all over the map: optimizing for 1..100000 { } style loops into cheaper while loops, dealing with various small bugs reported in the ticket system, implementing a remaining missing form of colonpair syntax (:42nd meaning nd => 42), implementing Supply.on_demand for creating supplies out of…well…most things, optimizing push/unshift of single items…it goes on. I’ll keep trucking away at these sorts of things over the coming months; there’s some bugs I really want to nail.
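
For the curious, that colonpair form looks like this in practice (my own throwaway examples, easy to try in the REPL):

    say (:42nd);                # nd => 42; the value leads the name
    say (:nd(42));              # the equivalent long form
    my %opts = :3tries, :quiet;
    say %opts;                  # {quiet => True, tries => 3}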

MoarVM’s dynamic optimizer

A bunch of my recent and current work revolves around MoarVM’s dynamic bytecode optimizer, known as “spesh” (because its primary – though not only – strategy is to specialize bytecode by type). Spesh first appeared in the 2014.04 release of MoarVM. Since then, it’s improved in numerous ways. It now has a logging phase, where it gathers extra type information at a number of places in the code. After this, it checks if the type information is stable – which is often the case, as most potentially polymorphic code is monomorphic (or put another way, dynamic languages are mostly just eventually-static). Provided we did get consistent types recorded, guard clauses are inserted (which cheaply check that we really did get the expected type, and if not, trigger deoptimization – falling back to the safe but slower bytecode). The types can then be assumed by the code that follows, allowing a bunch of optimizations to code that the initial specializer just couldn’t do much with.

Another important optimization spesh learned was optimizing dispatch based on the type information. By the time the code gets hot enough to specialize, multiple dispatch caches are primed. These, in combination with type information, are used to resolve many multiple dispatches, meaning that they become as cheap as single dispatches. Furthermore, if the callee has also been specialized – which is likely – then we can pick the appropriate specialization candidate right off, eliminating a bunch of duplicate guard checks. Everything described so far was on 2014.05.

So, what about 2014.06? Well, the 2014.05 work on dispatch – working out exactly what code we’re going to be running – was really paving the way for a much more significant optimization: inlining. 2014.06 thus brought basic support for inlining. It is mostly only capable of inlining NQP code at the moment, but by 2014.07 it’ll be handling inlining of the majority of basic ops in Rakudo that the static optimizer can’t already nail. Implementing inlining was tricky in places. I decided to go straight for the jugular and support multi-level inlining – that is, inlining things that also inline things. There are a bunch of places in Perl 6 this will be useful; for example, ?$some_int compiles to prefix:<?>($some_int), which in turn calls $some_int.Bool. That is implemented as nqp::p6bool(nqp::bool_I(self)). With inlining, we’ll be able to flatten away those calls. In fact, we’re just a small number of patches off that very case actually being handled.
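
Spelled out as runnable code (a snippet of my own; the comments just trace the chain described above):

    my Int $some_int = 42;
    say ?$some_int;    # True; compiles to prefix:<?>($some_int), which
                       # calls $some_int.Bool, which boils down to
                       # nqp::p6bool(nqp::bool_I(self)); three layers of
                       # calls that inlining can flatten away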

The other thing that made it tricky to implement inlining is that spesh is a speculative optimizer. It looks at what types it’s seeing, and optimizes assuming it will always get those types. Those optimizations include inlining. So, what if we’re inlining a couple of levels deep, are inside one of the inlined bits of code, and something happens that invalidates our assumptions? This triggers de-optimization. However, in the case of inlining, it has to go back and actually create the stack frames that it elided creating thanks to having applied inlining. This was a bit fiddly, but it’s done and was part of the 2014.06 release.

Another small but significant thing in 2014.06 is that we started optimizing some calls involving named parameters to pull the parameters out of the caller’s callsite by index. When the optimization applies, it saves doing any string comparisons whatsoever when handling named parameters.
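
As an illustration (a toy example of mine, not code from the actual patch), consider a callsite whose named arguments are fixed at compile time:

    sub connect(:$host!, :$port = 5432) {
        say "connecting to $host, port $port";
    }
    # This callsite always carries exactly one named argument, 'host'.
    # Once the call is specialized, the parameter can be fetched by its
    # index in the caller's argument list, instead of comparing name
    # strings on every single call.
    connect(host => 'db.example.com') for ^3;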

So 2014.07 will make inlining able to cope better with Rakudo’s generated code, but anything else? Well, yes: I’m also working on OSR (On Stack Replacement). One problem today is that if the main body of the program is a hot loop doing thousands of iterations, we never actually get to specialize the loop code (because we only enter the code once, and so far it is repeated calls to a body of code that trigger optimization). This is especially an issue in benchmarks, but can certainly show up in real-life code too. OSR will allow us to detect that such a hot loop exists in unoptimized code, go off and optimize it, and then replace the running code with the optimized version. Those following closely might wonder if this isn’t just a kind of inverse de-optimization, and that is exactly how I’ll implement it: just use the deopt table backwards to work out where to shove the program counter. Last but not least, I also plan to work on a range of optimizations to generic code written in roles, to take away the genericity as part of specialization. Given the grammar engine uses roles in various places, this should be a healthy optimization for parsing.

JIT Compilation for MoarVM

I’m not actually implementing this one; rather, I’m mentoring brrt++, who is working on it for his Google Summer Of Code project. My work on spesh was in no small part to enable a good JIT. Many of the optimizations that spesh does turn expensive operations with various checks into cheap operations that just go and grab or store data, and thus should be nicely expressible in machine code. The JIT that brrt is working on goes from the graph produced by spesh. It “just” turns the graph into machine code, rather than improved bytecode. Of course, that’s still a good bit of work, especially given de-optimization has to be factored into the whole thing too. Still, progress is good, and I expect the 2014.08 MoarVM release will include the fruits of brrt’s hard work.


Jonathan Worthington | 6guts | 2014-06-25 22:20:06

Hi!

I have just submitted my midterm evaluation questionnaire! This part of Google Summer of Code was really great, I learnt a lot. I want to write here about my progress in the project.

What did I do during the first part of Google Summer of Code?

Here you can find my previous posts about GSoC:

What's new?

The important thing is that I merged all the HTTP::* repos into one, HTTP::UserAgent. You can still find the old repos; I just wanted to keep the commit history.

After this period, a simple HTTP client is available, just:

    use HTTP::UserAgent :simple;

    my $content = get "filip.sergot.pl";
    say $content;

or:

    getprint "filip.sergot.pl";

That's how you can print the source code of a website.

We also have a working prototype of a more complex UserAgent.

    use HTTP::UserAgent;

    my $ua = HTTP::UserAgent.new( :useragent('firefox_linux') );
    say $ua.get('http://ua.offensivecoder.com/').content;

Wait! But what is that 'firefox_linux' there? This is where HTTP::UserAgent::Common comes in, providing a list of the most commonly used User-Agent strings. It is built according to this article:

    chrome_w7_64   => 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
                       (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36',

    firefox_w7_64  => 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0)
                       Gecko/20100101 Firefox/29.0',

    ie_w7_64       => 'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0;
                       rv:11.0) like Gecko',

    chrome_w81_64  => 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36
                       (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36',

    firefox_w81_64 => 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:29.0)
                       Gecko/20100101 Firefox/29.0',

    mob_safari_osx => 'Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X)
                       AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201
                       Safari/9537.53',

    safari_osx     => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2)
                       AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3
                       Safari/537.75.14',

    chrome_osx     => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2)
                       AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131
                       Safari/537.36',

    firefox_linux  => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0)
                       Gecko/20100101 Firefox/29.0',

    chrome_linux   => 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36
                       (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36',

So we don't have to remember all those crazy User-Agent header strings; we are still able to write whatever we want, of course.

What's next?

Here is the plan for the next part of GSoC:

  • HTTP::UserAgent: implement handling cookies
  • implement TLS support
  • write rich spectests
  • write wide, rich documentation

Filip Sergot | filip.sergot.pl | 2014-06-25 00:00:00

I’m sorry for keeping you waiting yet again; the GulaschProgrammierNacht 14 kept me busy, and after it was done I was more tired than I had been in a long time ;)

Here’s what’s been going on:

  • FROGGS is continuously working on the CPAN support for panda. His code can query and fetch distributions from CPAN already, and decompress the .gz part of the .tar.gz, but the Archive::Tar module still needs to be finished.
  • jnthn added the very first steps towards an On-Stack-Replacement operation. OSR is required to optimize hot for loops, for example. Usually, we trigger optimization when a function is called often enough, but with such a for loop, all we do is jump back over the same code over and over. Instead of “jump to this different function”, OSR has to directly jump over into optimized bytecode (or jitted bytecode) and make sure variables on the stack and exception handlers and all those things are kept in order.
  • The YAPC::NA (“Yet Another Perl Conference, North America”) is currently happening. As far as I’ve heard, there are only two Perl 6 related talks. Find the complete list of talks on the official website. A bunch of videos have already been uploaded to YouTube at the time of writing. Find them in the YAPC NA YouTube account. The Perl Six YouTube account also has a playlist of English Perl 6 related talks that contains the YAPC::NA talks.
  • lizmat, with a bit of help from jnthn, optimized say and note if they are called with a single Str argument.
  • lizmat also worked more on the CompUnitRepo stuff in Rakudo as well as CPAN, implementing more and more stuff from S11.
  • lizmat implemented an :initial argument for my zip_latest Supply combinator. It helps in use cases where you want to get a combination of values from all supplies even if not all supplies have supplied their very first value yet.
  • jnthn fixed the behavior of the “item” sub. The “item” method has always been correct, however.
  • FROGGS implemented subbuf-rw and added the “a” and “Z” directives to the unpack method. I’m guessing this is in order to satisfy the needs of Archive::Tar.
  • we updated Parrot’s required version to 6.5.0, giving us nice things like Unicode database fixes and faster core PMC classes.
  • carbin fixed the default file mode in MoarVM’s file ops to be 0666.
  • Mouq, smls and teodozjan did a bunch of work on doc.perl6.org; among other things, they implemented a design overhaul that had been made by another member of the community some time ago. It’s easier to find one’s way around the docs now, and there’s more content!
  • japhb added a bunch of new bigger benchmarks to perl6-bench and also started implementing “history” comparisons among other things. Those show the timings of single benchmarks in different revisions of a backend as one line.
  • dwarring fixed and created more tests for the advent calendar articles.
  • masak wrote up an interesting article, with a bunch of code, on his blog. It discusses and solves a problem called “Boxes and Pebbles”.
  • softmoth improved the look of modules.perl6.org.
  • bebus reported being able to run Perl 6 on a Nexus 4 android phone!

Another thing that’s happened is that the Rakudo 2014.06 compiler release got out. There’s the Rakudo changelog, the NQP changelog and the MoarVM changelog. The Parrot changelog can be found on the Parrot website, which also contains the latest progress report of the Parrot GSoC student.

Here’s a bit of an update for the GSoC projects:

  • This week, mentors are submitting evaluations to Google for their students. I expect all three students to be allowed to continue; I’ve personally been pleased with their work!
  • brrt just pushed support for passing and returning floating point values to and from methods and then added some arithmetic operations. I didn’t run tests myself, but the fibonacci example supposedly runs 2.5x faster with the JIT than without. A pretty good start!
  • filip sergot built a binary “http-download” that is a very thin frontend to HTTP::UserAgent that can handle encodings like the chunked transfer encoding. Of course you can do your own requests rather than just downloads with that module.

There’s also a low-hanging fruit for you to try to fix:

  • MoarVM can be compiled with “computed goto” or without. On GCC, we could turn it on unconditionally and it’d result in a very nice speed boost, but our build system (more correctly: MoarVM’s Configure.pl) doesn’t do that yet. Should be fairly easy to fix and be quite helpful. EDIT: FROGGS did this just now.

Special thanks to raiph who pointed out a whole bunch of items for this week’s post that I’d have otherwise missed! Hope you’ll have a nice rest-of-week :)

- Timo


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-06-24 18:24:37

2014-04-27, TEDxTaipei, https://www.youtube.com/watch?v=E6xIcKTRZ00
Hazel (Moderator), Linda (RailsGirls), Charles (JRuby), Matz (Ruby), Audrey.


Hazel: Now, we go for the first question. Are you all ready? The first question is "Today, working independently is no longer popular. Team cooperation has become a trend in the age of cross-discipline working."

“What we learn from programming — will it help us to do a better job in real life? Maybe  — for the males and for females — for everybody."

Linda: Hello. I suppose that when people told me that learning to program teaches you how to think, I didn't understand it in the beginning. Since learning, I've worked with engineering teams a lot, and it really helps you structure a problem in the way that's needed for the engineers to work on it.

For instance, I would come up with a feature, and they'd go, "Hey, OK, let's do this." And being able to understand how the code works, or how that product is built, or what kind of features are feasible to build, and so forth, even though I didn't work as an engineer, helped me work better and more efficiently.

[laughter and applause]

Charles: It's interesting, because just in the past few weeks, my wife has decided to start to learn how to program. She's very excited about the possibilities. She was a little frightened at first, of the difficulties she might run into. But, for years, she's been the primary chef in the house. She's taken care of the kids. She's knitted. She's done other crafts and projects.

What she started to see is that what she's learning in programming fits very well with other methodical, recipe-type systems, the building of crafts, the building of a meal, things like that, and ties into the rest of her life, the more that she picks up programming. It's been exciting to see that.

[laughter]

Audrey: Since Linda and Charles have addressed programming’s importance so well, I’d like to talk a little bit about the teamwork part. I think part of being an audience — as we know — is the ability to listen.

I think a lot of the experience in programming nowadays online, whether it happens on GitHub or other social events, is the ability to listen, or to perceive, or see each other's viewpoints. We see that on GitHub issues, we see that on mailing lists, we see that on IRC, we see that on wikis.

I think those, taken together, are much more important than code itself. As for code itself: as why the lucky stiff retweeted, code never lasts long anyway. It's always replaced by something new.

But human memories, the shards of the souls that we share together, those may not be as precise as code, but they outlast code. Community, people, their relationships, their links, they outlast code.

Code is this wonderful opportunity as an anchor for us to learn to listen to each other, such that we can make something more beautiful and that's objectively there. It's like an artifact, a pyramid or something that we could see, growing day to day objectively, as a result of our being generous to each other.

I think being able to listen, and to be able to give back is the most important thing for us to learn as part of programming in the teamwork that's the future. Thank you.

[applause]

Matz: I think I have to say something.

[laughter]

Matz: Despite the original question, in the past, creating serious software was not independent work; people couldn't create serious software alone. They had to work in the same company and on the same project, working together as a team, maybe hundreds of programmers, to create systems like IBM’s System/360 or something.

But there are technologies that changed the situation, like, for example, Ruby. Ruby was started by me alone, as an amateur programming language designer. I created the first snowball and put it onto the Internet, so that everyone could gather together. Then we worked together over the Internet.

Ruby's getting better and bigger, the community has grown, and JRuby has been created, and then Rubinius, and so many projects with Rails, or Sinatra, or even other communities like the RailsGirls, that sort of thing. That means we can be more flexible using technology.

We can work alone; an individual can start great things using the power of the Internet. At the same time, using — say community, the Internet, or even GitHub — we can be socialized to form the power to create great things by programming.

I believe that is the flexibility: from the individual to a huge community, like millions of people. We can choose, not by the company, not by the project, not by the organization or something. That kind of flexibility is the power of the 21st century, I believe.

Charles: One more thought. It occurred to me that one of the biggest advantages that you'll get out of learning how to program is the fact that you're going to be able to cooperate, and understand, and work with other programmers, of whom there are more and more.

There's more software being created every day. It runs the world, and being able to be part of that world, I think, means that almost everybody should learn how to do some programming. You've got to understand how the world around you works. Also, as Matz was saying about starting Ruby, you don't know if that piece of code that you write tomorrow, that one project that you build, might be the seed that grows into a forest.

The project that Matz started years ago is why we're all here, why we have wonderful events, why we're creating such amazing applications. It could be any one of you. You could be the next one to create something that grows into that forest of a wonderful programmed world.

[applause]

Hazel: Could you provide some advice for people who want to promote programming, on how to avoid situations like this [gender exclusion]? Linda, maybe you can share some ideas about that: what do you think can effectively reduce uncomfortable situations like this? And Matz and Charles, maybe you can share what you see in the IT industry.

Audrey: Hazel refers to a private conversation we had before the conference, and the basic point was that Taiwan is culturally influenced both by the Japanese stereotype of women, and also by the Confucian Chinese treatment of women.

There is this sense of politeness that's built in, not only into women, but into everybody, so that when we are joining a community, we tend to wait for a safe point where we think we have established some sort of sympathy, or empathy, or understanding with the group, before we even start to speak, or even raise our hand. This is a very East Asian thing.

But in particular for women, if they see a whole space composed of men, or of people who are gender expressions that differ so much from theirs, it's very difficult to establish this kind of rapport, or mutual support sense, before starting to join, or start participation. That's a real entry barrier, as Hazel mentioned herself. It's an artifact of the community's composition, and the culture of things in Taiwan, or in East Asia.

It's not specific to women alone. As for how to fix this, well, some people do that by sheer obliviousness to their social scripts. They are like, "Well, we shall dive in right here."

[laughter]

Audrey: When they just jump into a community and start asking stupid questions, some people would say, "Hey, why are you here? You're out of line there, right?" But then [laughs] after a while, they start to become part of the community, and that's actually the fastest way.

As Matz actually implied, the fastest way to get a community going is by releasing something imperfect. It's like posting your question and then answering it yourself in a very bad way. A lot of people would join in, saying, "Hey, there's a better way to do that."

So people who are oblivious to this kind of social training can actually enter that kind of online technical community much more easily — even after a lot of arguments and fighting — than people who are polite, people who instead shift into some other, more friendly community, like making digital art, or things like that.

Actually, a suggestion I would make is to be strategically oblivious. [laughs] Just to make some headway into it, pretending that you're already part of the group, and that's a self-fulfilling prophecy, and you will become part of the group in no time.

Linda: I'll segue into it with a very personal experience. I wasn't a professional programmer, and I'm still not a professional programmer, so I was absolutely oblivious to all of that life, all of that trauma, and all of the drama that surrounded females in technology, and those sorts of problems.

I just wanted something that would help me learn more programming. I didn't know that the word “girls” would offend many Americans, and I’d like to think I was able to build this platform because I’m not a native speaker; I didn't know that you're supposed to teach programming in this manner or that manner.

There were so many things that we didn't know, which afforded all of us to experiment, and do things and not worry too much about what happened.

I totally agree with the thought that the best way is to barge into a community and start asking questions, but I come from a culture where it is also very important to be like others. Finnish people tend to be relatively silent and observe, rather than raise questions. One of the things that I deliberately wanted to have in the RailsGirls workshops was some sort of cultural section, where we talk about people — like who are the people in the Rails community.

We talk about Matz, we talk about DHH, we talk about _why, and we talk about the FridayHug. We talk about all of these other institutions and things, because it's not only about the code.

Then we encourage people to become a part of their local group, and to come to these events, and to have the self-assurance that, "OK, I know enough to be able to go into a meet-up, and be a part of something bigger. I'm probably still not there technically, but I’d love to see DHH again."

[laughs]

Charles: One of the things I wanted to make sure gets said (it's been said in various ways all day today): if you ever feel like you're being excluded, or singled out, just always remember, it's not your fault.

Everybody's going to face that, not just in the programming community. Ask any over-40-years-old programmer how welcome they feel in Silicon Valley, San Francisco, that sort of thing. Look, it's not your fault, and remember that the reason we have events like this, and the reason this has become such a major issue, a feature point of discussion, is because we're trying to fix it.

There are resources that will help you avoid that exclusion, that ostracism from the community. Just keep fighting through it, and we'll try and help you as much as we can along the way.

[laughter]

Matz: Yeah, it's on. Yeah, just for a little more.

Matz: [laughs] In CRuby, we have, say, 90-something core contributors who have the privilege to commit to the central repository. Some of them are Japanese, some of them are living in the United States, and some of them are European. I don't know of any Taiwanese or Chinese contributors. Unfortunately, I know of no female contributor yet, but I'm expecting one.

Actually, I don't care about those aspects: gender, and nationality, and age, and titles, anything. We have very young contributors, like 18 or something, and older contributors in their 50s and 60s. But actually, I don't care. As Audrey said, we want to value people by their values. I think that being fair, and not caring too much about those other attributes, is crucial for community management.

Our community has a very good tradition of being nice, so the Ruby community is known to be nice people. As Linda said, the community, or open source, or even the Internet, is a human thing; we are human, and we have hearts. The communication between hearts is very important, even in software programming, software creation, or anything.

Charles: Sorry, just one quick inspirational story. I went to a conference in Tunisia in 2013, a Java event of about 100 people, the very first time it was done, at a technical university in Tunis, and the conference and the university had a majority of women. It was the first time I'd ever seen that anywhere in the world, and I was amazed by it.

But they were really excited, and they were pretty much in charge there. [laughs] They were running that thing. But it was just great to see that there are parts of the world where they were already getting that going, and starting to get more women involved in technology. I'm just thrilled that it's happening here, and happening around the world.

Thank you to Linda for arranging this, and to the RailsBridge folks for the sessions they're doing in the US. It's really an exciting time, and I'm glad that there are so many more interesting, wonderful programmers entering the community now.

Linda: Yesterday at RubyConf Taiwan, there were a lot of RailsGirls alumni who participated, and volunteered over there, and helped organize the event, and I think it's almost like a fairytale that all of a sudden we would have hundreds of women taking part as speakers in conferences.

But I do wish that all of you who volunteered yesterday will keep on programming, and next year you will probably give a talk over there, and be there, and we will have more women as speakers at conferences.

Hazel: RailsGirls hosted this event, so let's talk about the RailsGirls community. From your observation, what are the factors in this community that encourage females to get into programming?

Linda: I think that needs a lot of broadening, because RailsGirls, again, was a very personal thing to teach myself programming, and it's definitely not a panacea for getting more females into the programming world, and as Charles mentioned, there are a lot of organizations that are doing wonderful work for this in very different ways. Can you repeat the question? What was it?

[laughs]

Hazel: Maybe we can change the question: what are the factors?

Linda: Yeah, what are the factors, in general, in bringing more females to programming? As I mentioned in my talk, for me it was the practical application, the expressing myself, and the creative side of things that initially gave me that aha moment. I think there are almost two types of click moments in programming: there's the very tangible moment when you see something come alive on the screen, and go, "Oh wow, I made that?" Then there's the more intellectual pleasure of seeing something like beautiful code that is well formulated, and getting that "Whoa," an intellectual aha moment.

Sometimes our schooling system aims for the latter one, where we learn about arrays for a long time before having that tangible aha moment. Maybe one way of getting more women involved in general is to have more of those first moments.

Audrey: To extend the analogy, I'd like to introduce the question, "Why do people write poetry?" People write poetry because they feel very strongly about something. Because of that, a lot of teenagers write poetry, because they feel very strongly about something.

[laughter]

Audrey: Only a few of us continue to write after our teenage years. But in any case, that was the spark. That was the first moment. Whatever your gender or age, if you start caring very much about something, there's got to be a way that programming is going to be helpful to make that happen.

As a rule, either there is a way to reduce your stress by automating some of the tasks, or a way to get your message across, or a way to get more community around you, or to get better equipment so that you can do whatever you care about more efficiently. There's going to be some way that programming, like a good mastery of language, is going to help you communicate with other people.

And that’s Linda’s second point, when you see that the poetry you write touched another person’s heart. They come to you and say, "Hey, I read your poem, and it touched me very much, and I was crying," or something — just by reading your poem. Then you get the sense of accomplishment, of having touched another human being with something you created.

It's actually very easy to do, especially with web programming nowadays, so that's another thing that one can focus on in getting your message across, not only with the existing web systems like Twitter, or Facebook, or something, but with something that you design yourself — even though it's with iframes and entry-level CSS — because it has an impact, because it is you; it is a part of you. It's part of your soul, and not just some post on some blog system, or on some social network system. Thank you.

[applause]

Charles: I'd say one of the biggest factors that is going to make it easier for women to enter the programming world is exactly what you all are doing, having more women in the community, more women that can identify with new programmers that are coming to the community.

You're helping lay the foundation of all future generations of women programmers. You're helping open that door, and make it more attractive, make it more comfortable for women in the future to become programmers.

Don't forget that it's an amazing thing that you're doing, for all those future generations that would have had the same trouble that people had 10 years ago, or 20 years ago, trying to get into this sort of field. So just keep up what you're doing, because you're really helping. You're helping all the women in the world at this point.

Matz: I have never written a poem in my life, but according to the old saying, a poem is the outcome of the passion inside. If that's true, then Ruby itself is my poem: a 600,000-line [laughs] poem written in C.

[applause]

Charles: Yeah, an epic poem.

[laughter]

Audrey: It is also very poignant.

Matz: But anyway, the primary motivation behind the creation of Ruby is passion: my passion for, and love of, programming languages. Loving programming languages sounds weird to most people, but I can't help it. [laughs] Since I was in high school, I have loved programming languages, so the very important thing is following your passion.

Maybe you ladies, girls and boys in the field have some passion to create something, and then you see your software running on the screen, and you feel something good. That is the passion that you start with, and that passion brings you to be a better programmer, a better creator, or even an artist.

If I may say something: follow your passion, and then nourish your passion. That's a very important thing for becoming a better person, maybe.

Hazel: Please, everyone, give a round of applause for all four of them first.

[applause]

Hazel: This is really, really exciting, right? I know many of you sitting in the seats have not attended a programming course, or been through the coding process, before. I want to ask a question: do you want to learn programming? If you do, please raise your hand. Raise your hand higher, higher. [laughter] Please, OK. Please don’t put your hand down. Please, we'll hire you soon.

[laughter]

Hazel: Is there any programmer right here who wants to teach all of them? Please raise your hand. Wow, see, there are so many hands in the air. I think this is really possible. If you really want to get involved in this community, if you want to learn, just ask them to teach you, and maybe, like the RailsGirls, you can establish your own community. Maybe in a different language, a different city, a different area; that's all possible.

Lastly, I want all of you to please give some words to the audience sitting in the seats, to encourage them to pursue learning programming, or maybe whatever else they want to do. Could you give some words?

Audrey: I'd like to invite you to be yourself. There's a lot going on in Taiwan where you see, in the magazines or in the books, that you have to be as successful, or as interesting, or as fashionable as some person who appears on the cover of a magazine, or a book. There's all sorts of this kind of thing going on. I'm sure it's a worldwide thing.

But I'd like to encourage you to think that if you follow the ways outlined in those magazines and those books, the best you could do is to just be a very good copy, or even a better copy of someone else. But that's not something irreplaceable. That's not something authentic, and that's not something that's authentic to you.

I guess the only way, for me at least, to follow your passion is to think back on the unique experiences that make you care about the things you care about, that make you feel things the way you feel them, and then from that discover a ground on which to be authentic with yourself. Without exception, I think passion and compassion will follow from that. Thank you.

[applause]

Charles: I love to read sci-fi novels and fantasy novels, and I still love to watch the Hollywood movies, science fiction. But in reality we have no ESP, and we have no magical powers like Spiderman or Superman. Right now, though, we can control computers.

We can control the network, so we can communicate with people all over the world in a second, at the speed of light. That's a kind of magical power. I believe programming is the magical power that enables imagination to become real, so learning, and having the ability to program computers, is to be able to order computers to do magical things. Learn to program, be a magician, be a wizard, and be happier.

[applause]

Linda: I'm still pretty much figuring out what I want to be when I grow up, and what I want to be doing.

[laughter]

Linda: I've had the exact same idea. I've gone through all of my life and tried to figure out what the points are that made me who I am today, and why I do the things that I do. The first of them is from age eight, I think. Then I ran into this quote from Steve Jobs, who was talking about connecting the dots: how you can connect dots only looking backwards, not looking forward.

Then I looked at the sky, and I don’t know if any of you know who came up with constellations, like the shapes that stars form, but it wasn’t scientists, it wasn’t engineers, it was the storytellers.

The people who wanted to make sense of the sky by drawing pictures in there, and calling, “This is an owl, and this is a donkey.” In the same manner, I've been trying to figure out what are the individual dots in my life, and what kind of a picture they form.

Those pictures can change throughout time, and there might be different kinds of connections, but it's important to have those dots in the first place, and to start thinking about what they form up. It's as you said, very individual, and very unique, and it shouldn't be something that you just copy from someone else.

[applause]

Matz: I'm a little biased, but honestly, I believe that being able to program is the most powerful skill that a person can have. It requires essentially no resources. It helps to have a computer, but essentially it's all just from your mind. It's what you can create.

Anything you can imagine, you can create, and you don't have to have anything but time, and effort, and energy to do it. Once you start to get into this, it's almost like a drug. You're going to feel how powerful you can be, and how much you can do with programming. Get through the early tough stages, because it's a great ride, and it's really exciting.

[applause]

Hazel: OK. Thank you, all four of you. I received some questions from the audience, but before we answer them: are there any more questions that you want to ask? Are there any notes you want to pass to the stage? Is there anyone, or are these all the questions?

Hazel: Is there anyone? No? Let's start the Q&A panel. I think this question is for the programmers to answer. What makes you want to push girls to attend this event, and what difference do you think it can make to the girls who are involved?

Audrey: The question was really, I think, about what kind of difference we think these events make to the lives of the women who attend them. That's a very good question, actually.

When we talk about pushing someone to compel themselves into taking up an important social task, the way we do it is with finesse. It's about raising a spark that kindles something they already care about, but feel helpless about, maybe because they believe they're the only person on earth who cares about this issue, or maybe they believe that the system is too large, too immutable, that people cannot change it just by themselves, and things like that.

I think programming in itself is a way to empower people, to see that there are millions of people in the world who put maybe just five minutes a day into something. Or, if you're really addicted, 15 hours a day into something…

[laughter]

Audrey: …it visibly makes the world better. I think that impacts a person's life, and empowers them in ways very few other fields could provide.

Charles: I'd say I have selfish reasons. Pretty much every programmer I've ever met has taught me something. If women are not part of that community then there are things I'm not learning. I want everybody to be part of this community, so that I have a chance to meet and talk with you about programming some day.

It all goes around. The community can't work without the community. It has to be filled with lots of different people, lots of different ideas and different ways of looking at things. It's not even just for you. I think it's absolutely crucial to the programming world, the IT and tech world, to bring more minds in. This is a great way to do it.

Linda: For the RailsGirls event, we oftentimes say that you don't learn much about programming, per se, in one weekend and especially using Rails. But you do get to meet the coaches, so you do get a real connection with a real programmer, and then you get to meet all the other women who are as excited about technology as you.

Here in Taiwan you see a lot of women at events. But we've had events in Egypt, or Cairo, or Azerbaijan, where they just don't even know that other women exist who are excited about this stuff. It's a very powerful thing, to meet those people.

Matz: The motivation and the background are up to each individual, like gaining new knowledge, or improving your income by learning programming. But no matter what motivation you have, I really want you to understand that programming itself is pretty enjoyable. It's fun. I have been programming for more than 30 years, and I have never gotten tired of it. It's very exciting. I often forget to eat; I forget to sleep.

[laughter and applause]

Matz: Yeah, it's that much fun. I want you to understand the fun, the joy. Plus, you have your own individual motivation, and knowing that fun will improve and even enhance your individual motivation.

Hazel: After the first question, here comes the second. It's also related to the first one. Here is a person who works in the marketing industry. She wants to ask: how can learning programming help in her real life?

I think, maybe, this question, we should ask the RailsGirls attendee, right here. Do any RailsGirls want to answer this question? Any RailsGirls? Oops. I think Linda has a lot of experience about this.

Linda: Let me see, marketing people, they run email campaigns. Maybe you can do a dashboard that showcases the analytics of your email campaigns, and that communicates better to your boss how important these things are.

Maybe you need to order a new campaign site, and you have a programmer, and the programmer says, “This is impossible. You can't do this,” and so forth. Then you're like, “Yeah, bullshit. You can do this.”

[laughter]

Linda: Stuff like that. There's a lot of really tangible and real things that you can do in your industry. Any other brainstorming? I have never worked as a marketer.

Audrey: I'm going to talk a little bit more philosophically. Marketing is about getting a message across to another person, such that they may wish to exchange something that they have with what you have, so that you both become better off. This is the fundamental of marketing.

Traditionally, there are three kinds of exchange, or marketing behavior, that we are used to. One is the in-group: maybe we’re in a family, or maybe we’re in a “community” that has an in-group and an out-group.

Members of the family, or of such in-groups, share everything; they exchange everything with each other. But they don't share with outsiders, like 非我族類 (“aliens”) or something. This is one kind of exchange.

The second kind of exchange is what we see in a government or in a hierarchy where we only exchange with the upper ladder or the downward ladder. Like, I only report to my manager, my manager reports to their manager, and so on, so that the exchange of information is entirely hierarchical.

The third one is that we exchange with whoever has the cash, whoever has the money. We offer our service or our goods to people who have money, and we use that money to exchange with someone else, with other marketers who sell us things. We basically exchange through currency.

These are the three dominant exchange models in the world.

But by participating — as a marketer — into open source, like the Ruby community, you're going to learn the fourth exchange model in the world. That is, you freely exchange with anyone in the world for whatever purpose whatsoever.

This is an extremely revolutionary idea: I don’t care about whether you're in the same ethnic group as me, I don't care whether you’re Taiwanese or not. I don't care whether you’re my boss or my manager, and I don’t care whether you have the cash. I'm going to offer my service and my generosity to you.

This kind of marketing, as we have proved with Linda’s Kickstarter campaign, reaches more people, in a shorter time, more efficiently than any of those three legacy exchange models. That's going to be the trend of the 21st century. By participating in an open source community, you're going to see firsthand how this works, and how to make it work for you in real time.

[applause]

Matz: I used to work as a professional programmer; I still am one. But I worked for a company, ordered to develop software in a team. In that time, many things were out of my control: the boss decides something, that you have to use this tool, that you have to use this language, or something like that. But it's bullshit.

[laughter]

Matz: Now I'm working on open source software, mostly because it enhances my control. I can decide by myself what project I work on, and I can decide which technology I use. I can decide what I do today, to a much greater degree than I used to. I think one of the joys of programming is having power, and having control. Of course, the power comes with responsibility.

Hazel: Thank you. Well, here is a question about the growing popularity of programming among women. If the female programmer community is getting bigger and bigger, will it have any influence on the market of the programming industry?

Linda: I was just writing a report on this subject. The first professional programmers in the world were in the Second World War, and there were a lot of females operating computers and calculating ballistic things, and so forth — Audrey might know more about the history of this — and at that time they were doing a service to their country.

Then the next generation, in the ‘60s, was females who were operating computers or programming computers, because the men felt that it was a stupid manual-labor thing; that's why women did it, the same way they operated telephones and so forth. But the women secretly realized that programming is really powerful, and they became better and better and better at it. This was at Bell Labs, for example.

I don't remember the name of the computer anymore, but they were working on this computer, and the whole image of programming being male was really crafted in the ‘60s, because the men wanted to take back the programming industry.

The requirements they used to get people into programming positions were crafted so that only young men would fit them; this whole movement was done very artificially. Before that it was a women's profession, for better or worse, because it wasn't valued by society at the time. But maybe Audrey knows more about that.

Audrey: Actually, Linda said pretty much everything there is to say about the history in the United States. I think the market for teaching, applying, and doing programming is going to become very much distributed.

Because 20 years ago, even, we had this idea of a larval stage — 結蛹期. It’s part of the hacker dictionary, the Jargon File. It says, basically, that to become a professional programmer, a hacker, you have to spend three or four years of your time addicted to your computer, totally breaking your sleep patterns and working 20-hour shifts. Then you will, at one point, reach enlightenment. This is a lot like Zen Buddhism.

[laughter]

Audrey: Once you reach that point, once you reach the point of 零的轉移、巫術的權勢 (the point of Zero transference — the power of wizardry), basically you become a wizard. Once you become a wizard, the distinctions — like Matz said, of gender, of age, of nationality, of ethnicity — they just disappear. It's like the scene in The Matrix, where Neo sees everything as green digits.

[laughter]

Audrey: Once you hit that stage, nothing else really affects your objective judgment. That’s also a very Zen Buddhism thing. But I think that’s partially a myth, because it was so difficult to learn about programming without the Internet community at that time.

Now, with RailsGirls and communities like that, we have a slope. You can very comfortably stay at any point on the slope, with a lot of people on the same ladder to support each other, and you don't need to spend two or three years of your life on it.

This way, you can spread it over five years or six years — you can even sleep eight hours a day without falling behind. I think that's going to change the market very much, because then, instead of just amateurs and professionals, we're going to have market segments for every point on the ladder, and that's going to make the market and the community much larger.

[applause]

Hazel: Next question: What was your first entry into programming?

Charles: I don't really know exactly when, but when I was six or seven and learned how to use the computer, I was immediately trying to figure out how to make it do more things than it did right then.

But over the years, the things that have really inspired me to keep going are, first of all, the power rush, which is nice, but also being able to make other people happy and make their lives easier by writing software that helps them.

I work on JRuby as a passion, because I hear stories from people that use our software and love it, and they're happy and their lives are better as a result. That's what's really kept me going and inspired me to continue to be a programmer, and to try to get everybody that I know to be a programmer as well, because it just brings so much to other people's lives.

[speakers pass the mic around]

Audrey: This is like passing the baton, right?

Audrey: I remember my first entry into programming was when I was seven, and I got this really old book. Matz actually just told me, in private, that he had this really small town library where there was a book about the Ada programming language.

There weren't many programming language books, and this one was an Ada programming reference. He just read it from cover to cover. Very unfortunately for me, my book was about GW-BASIC.

[laughter and applause]

Audrey: Yeah, if it had been Ada, maybe I would be a better programmer. But in any case, I read it from cover to cover. I didn't have a computer, though; I hadn't even seen a computer at that time.

What I did was take a piece of paper and start writing on it, drawing keyboards. I started pressing the keys on the paper and writing the letters that would come after the command line. I still remember the touch of my fingers on the page when I typed 10, space, RANDOMIZE TIMER, which is what you do in GW-BASIC. I had this etched in my muscle memory before I met my first computer.

But that was a defining point, because it shows that computing is not about computers; it's about a certain way of thinking. If you organize your thought in a certain way, it yields predictable results, and it yields something that you can show to other people, so they can replicate your results. This is actually the scientific paradigm. A person without any scientific equipment whatsoever could figure out, from an old GW-BASIC book, the scientific method for themselves. For me, that was the defining point.

Matz: Well, I had a similar experience. I was a little bit older when I met a computer; I was 15. The computer ran BASIC, a reduced subset of it; the language was very limited. It had only 4K of memory or something. The BASIC was very strict: it had only one-character variable names, which means you could have only 26 variables. That was kind of frustrating.

In the bookstore, I found a book about Pascal, so I read it from cover to cover. Then I realized that there are many programming languages out there. Programming languages are different from each other, so someone must have created those programming languages with some intention.

At that time, somehow, I got an idea: someone created each programming language for their own purpose, so why not me?

Since that idea struck my brain, I became very, very interested in programming languages. No matter what kind of program — I didn't care; I cared about the programming language.

My other friends wanted to program to, say, create games or make money or something, but I didn't care. I cared about the medium, not the purpose. I read through the Ada book, Pascal books, Modula, Lisp, and some other programming languages.

But I didn't have a computer to create a programming language with. I had no knowledge about compilers or interpreters, so I took my notebook and wrote down programs in my idea of a programming language. You don't need programming skill to design a programming language.

Unfortunately, I lost that notebook; it's a real shame. I don't remember anything about it. I believe it was something in between Pascal and Lisp.

Actually, I didn't have friends who knew computers in my high school days. Then I went to the university and met some people who loved programming. At that time, I found that very few people care about programming languages. I studied computer science and learned how to write compilers. Then, gradually, I created Ruby. Gradually, it took over the world.

[laughter]

Matz: The idea of a programming language was a very enlightening idea for me in my high school days.

Linda: [laughs] I told the Al Gore story already. [laughs] A defining moment. More recently, I went to the bookstore. Before I made the Ruby thing, I tried to look for books that would explain to kids how computers work.

I would find tons of books that talked about astronomy, like how to be an astronomer, or how a combustion engine works, aimed at [laughs] kids, but none that would explain how computers work. That was an "Aha" moment for me: this body of work, this material around software engineering, needs to exist, and maybe I need to be the person who does it. I loved Audrey's paper computer example.

One of the things I want to do is a little origami paper computer that kids can assemble themselves, put in the little CPU, and get a very tangible feeling of having a real computer, their first paper computer. As you said, computing is not about the actual hardware or anything like that, but about that experience of owning it.

Charles: The way to stay passionate about programming is to look at the things you deal with every day and find a way to solve them through programming. Raising kids, there are a million and one things that you could use a program to help you manage: sleep schedules, meals, or whatever. All sorts of things that you could do.

[laughter]

Charles: The other thing is to remember that, of all of the abilities we have as humans, being able to program, being able to design programs, probably has some of the fewest demands on you. It really just needs time and a little bit of energy, which, of course, when you're raising kids, are the two things you really don't have anymore.

But as long as you're able to find just a few minutes in the day to keep moving forward, and build things around your life and your passions (kids and stuff in the house, if it gets to that point), you'll be able to keep going.

I can't imagine programming not being part of my life anymore, even through the most difficult times. I've had to take some breaks sometimes; I've had to go away. I've gotten burned out on the projects I'm working on, upset by things that people say to me or about my programs and my projects. But I've always come back.

I don't know anybody who has been a programmer that wasn't a programmer forever, in some way. It changes you, and I think it stays with you for the rest of your life.

Linda: A practical example: my friend made a little Arduino clock that connects to her Fitbit, and on a little screen it shows her little kids, all the time, how many steps away from home she is. Projects like that might be helpful in kindling that passion.

I want to quote our practical German friends. I've talked to a lot of people all around the world about their motivations for taking part in RailsGirls and so forth. One of the German girls approached me and said that “Programming is the most flexible work you have. It's well paid, you can do it at home with the kids. You can do it in the evening, you can do it in the morning.” It allows them to be very self-sufficient, and that's why they want to change careers.

[applause]

Matz: Ladies are very easily distracted away from programming, or even from their careers, mostly because of social pressure and the psychological “mind-set” or something. I declare it's OK to have passion for programming, for your career. Even though you have to care about your family and your children, you can still have passion for programming and your career.

You can come back someday. That kind of passion can motivate you. That kind of passion could be an engine to drive your life forward.

Audrey: As a personal anecdote, my brother has been teaching my father programming for a few months now, and my father is in his 60s. He has a lot of students to teach, a lot of daily routines, three dogs, parents, and everything.

I think the most important thing is putting your ideas somewhere other people can see and improve on them, and Ruby is a very quick way to do that, of course. As long as you have a GitHub account you can just push something there, or even just a gist, so you don't have to create a repository.

That way people can start working on the code and giving you ideas and suggestions. Even if GitHub seems very hard — and for my dad it actually is — you can use other tools like Hackpad, or even Google Docs, Google Spreadsheets, EtherCalc, or something like that.

Any online spreadsheet or document, or any online drawing tool: you can capture your ideas in a place where everyone can see and comment on them. That's actually the first step to programming. I mean it in a social way as well, not only in a coding way. To go back to the proper question: for someone whose time is fragmented or limited, one way is to watch or participate in one of those ongoing projects that require some input from the crowd but do not require your full-time dedication. I'm going to have an advertisement for g0v.

In g0v we have a project going at the moment, where we took all the grid cells from the spreadsheet-like reports of people's donations to political campaigns. They were locked away in PDFs that are only allowed to be printed with a watermark — they are not published online — and you have to pay copying fees. You can only take two pages, or 100 pages, out at a time. It was very archaic, and that is because they don't have a budget to do it.

What we did was ask people to take the copied papers out, scan them, and upload them to Dropbox or Google Drive. You don't need to be very technical to do that, and then algorithms split them into individual grid cells.

Then you can visit the site and see just one grid cell. It's like a Captcha, or a game, where you just see a picture — maybe a name, or maybe a number — and type in that name or number. With this crowd-sourced approach we have 300,000 cells identified and counting. People visit and improve the code, so the donations are now transparent, and they become part of the communal property.

This motivates a lot of people who have no idea what programming is to start helping us write a better guideline. For example, there's a person with no experience designing web pages who just feels very strongly about the cause. They learned how to do Google Sites, or basic HTML, so that they could put their beautiful icons into the standard operating procedure for those things.

This is about putting something where people can see it and contribute to it. Even if you only have five minutes, or even just 15 seconds a day, you can feel that you're part of the community, and you get to know people who, once you have a little bit more time, will take you further along the path.

Audrey Tang | Pugs | 2014-06-22 00:54:10

On behalf of the Parrot team, I'm proud to announce Parrot 6.5.0, also known
as "Black-winged Lovebird". Parrot (http://parrot.org/) is a virtual machine
aimed at running all dynamic languages.

Parrot 6.5.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/devel/6.5.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.5.0 News:
- Core
+ Re-add -DMEMORY_DEBUG support to the new GMS GC [GH #1073]
+ Added 2 new PMC method attributes :manual_wb and :no_wb and
worked over all core PMCs for unneeded GC write barriers.
Thereby removed the vtable method calling overhead of _orig into a
wrapper with the mandatory write barrier. This was the first part
of Chirag's GSOC project. [GH #1069]
+ find_codepoint: Added name aliases for control character names which
disappeared with ICU 5.2, and added those names to non-ICU builds also.
Improved ICU search for u_charFromName() to check all UCharNameChoices,
not only U_EXTENDED_CHAR_NAME. [GH #1075, roast #43]
- Build
+ Fixed wrong ICU header probes on multi-arch systems (debian)
[GH #1014]
+ Fix opengl on bsd which does not have __APPLE__ defined as 0
[GH #1070]
+ pmc2c was extended to improve write barriers and deal with :manual_wb,
:no_wb and RETURN() in VTABLE methods. [GH #1069]
- Documentation
+ Improved the docs for pmc and pmc2c [GH #1069]
+ Harmonized pmc names for the PMC html index [GH #1079]
- Tests
+ Fix t/op/gc.t for --gc=inf
+ Fix t/library/pcre.t for --without-pcre or windows
- Community
+ Our GSOC project succeeded in the first deliverable
+ Non-core dynpmc's with multiple return paths in writer VTABLE methods
will need to be changed to use either :manual_wb or RETURN() as in
PCCMETHODs, and can now be optimized for unneeded GC write barriers.
E.g. nqp 6model got 2-4% faster.

nqp should be bumped.
Fixes for nqp:
* icu detection on multi-arch systems (newer debian),
* icu workaround for control char name aliases (see roast #43),
* better GC writer barriers (for 6model in nqp branch pmc2c_orig)
parrot -O2 is not yet recommended for nqp/rakudo as it produced slower
code there.

The SHA256 message digests for the downloadable tarballs are:
249047f8fc2041ce460d3524547c10faf4462facdffd6b4f9b42f250640c79de
parrot-6.5.0.tar.gz
1f45044f8dcfaafef795e93a91c8f4a55dd8347cc0359ce4dcf6f34f7bfff140
parrot-6.5.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is 15 Jul 2014.

Enjoy!

--
Reini Urban
http://cpanel.net/ http://www.perl-compiler.org/

Perl 6 Announce | perl.perl6.announce | 2014-06-17 17:17:48

As I was riding to the airport the other day to pick up a friend, I stumbled across this math problem tweet:

Let n be a positive integer. We have n boxes where each box contains a
nonnegative number of pebbles. In each move we are allowed to take two pebbles
from a box we choose, throw away one of the pebbles, and put the other pebble
in another box we choose. An initial configuration of pebbles is called
solvable if it is possible to reach a configuration with no empty box, in a
finite (possibly zero) number of moves. Determine all initial configurations of
pebbles which are not solvable, but become solvable when an additional pebble
is added to a box, no matter which box is chosen.

I started out drawing stuff in my notebook to solve it, but at some point I decided to bring in Perl 6. The solution turned out to be quite illustrative, so I decided to share it.

Below, I reproduce the problem specification, piece by piece, and interleave it with REPL interaction.

The problem

Let n be a positive integer. We have n boxes where each box contains a nonnegative number of pebbles.

$ perl6
> class Conf { has @.boxes where *.all >= 0; method gist { "[$.boxes]" } }
> sub conf(@boxes) { Conf.new(:@boxes) }; Nil
> sub n(Conf $c) { $c.boxes.elems }; Nil

I was a bit saddened to learn that the where clause on the attribute isn't enforced in Rakudo. There's now an RT ticket about that.

The Nil at the end of some lines is to quiet inconsequential or repetitive output from the REPL.

Let's take as our running concrete example the starting configuration [2, 0]. That is, two boxes, one with two pebbles and one empty. As we will see, this is one of the smallest answers to the problem.

> n(conf [2, 0])
2

In each move we are allowed to take two pebbles from a box we choose, throw away one of the pebbles, and put the other pebble in another box we choose.

> sub but(@list, &act) { my @new = @list; &act(@new); @new }; Nil
> sub add($c, $to, $count) { conf $c.boxes.&but(*.[$to] += $count) }; Nil
> sub remove($c, $from, $count) { conf $c.boxes.&but(*.[$from] -= $count) }; Nil
>
> sub move($c, $from, $to) { $c.&remove($from, 2).&add($to, 1) }; Nil
> sub moves-from($c, $from) { (move($c, $from, $_) for ^n($c)) }; Nil
> sub moves($c) { (moves-from($c, $_) if $c.boxes[$_] >= 2 for ^n($c)) }; Nil
> 
> moves(conf [2, 0])
[1 0] [0 1]

The condition if $c.boxes[$_] >= 2 ensures that we don't make a move when there aren't enough pebbles in a box.

An initial configuration of pebbles is called solvable if it is possible to reach a configuration with no empty box, in a finite (possibly zero) number of moves.

> sub has-empty-box($c) { so any($c.boxes) == 0 }; Nil
>
> has-empty-box(conf [2, 2, 2, 0])
True
> has-empty-box(conf [2, 2, 2, 1])
False

> sub is-solvable($c) { !has-empty-box($c) || so is-solvable any moves $c }; Nil
>
> is-solvable(conf [2, 0])
False
> is-solvable(conf [3, 0])
True

The definition of is-solvable is the first case where I feel that Perl 6 shines in this problem. That one-liner lets us perform a search using all possible moves for any configuration that has no empty boxes.

For example, if we did this:

> is-solvable(conf [4, 0, 0])

Then the tree search that happens in the background is this:

[4 0 0]
    [3 0 0]
        [2 0 0]
            [1 0 0]
            [0 1 0]
            [0 0 1]
        [1 1 0]
        [1 0 1]
    [2 1 0]
        [1 1 0]
        [0 2 0]
            [1 0 0]
            [0 1 0]
            [0 0 1]
        [0 1 1]
    [2 0 1]
        [1 0 1]
        [0 1 1]
        [0 0 2]
            [1 0 0]
            [0 1 0]
            [0 0 1]

...and is-solvable concludes that no matter how it moves the pebbles, it always ends up with a zero somewhere, so this configuration isn't solvable, and so the result is False.

By the way, we know that any search like this is finite, because every move reduces the net amount of pebbles.

Determine all initial configurations of pebbles which are not solvable, but become solvable when an additional pebble is added to a box, no matter which box is chosen.

> sub add-pebble($c, $to) { conf $c.boxes.&but(*.[$to] += 1) }; Nil
> sub add-pebble-anywhere($c) { (add-pebble($c, $_) for ^n($c)) }; Nil
> 
> add-pebble-anywhere(conf [2, 0])
[3 0] [2 1]

> sub is-answer($c) { !is-solvable($c) && so is-solvable all add-pebble-anywhere($c) }; Nil
> is-answer(conf [2, 0])
True
> is-answer(conf [4, 0, 0])
True

So as we see, our example configuration [2, 0] is a possible answer, because it is not in itself solvable, but adding a pebble in any of the two boxes makes it solvable. Similarly, the [4, 0, 0] that we tree-searched above isn't solvable, but becomes solvable with a pebble added anywhere.

Hostages, heroes and civilians

Having specified the problem thus far, I started to make it clearer in my mind by introducing idiosyncratic terminology. I started thinking of the empty boxes as hostages, because they need saving before the end of the day.

> sub hostages($c) { +$c.boxes.grep(0) }; Nil
> hostages(conf [2, 0])
1
> hostages(conf [3, 0, 0])
2

Likewise, some pairs of pebbles are heroes... but not all of them. First off, the two pebbles have to be in the same box to make up a hero.

Secondly, the bottom pebble is effectively fixed and cannot contribute to a hero. (Because if we removed it, there would be no pebbles left, and we'd have created another hostage.)

In other words, if we take the pebbles in a box, subtract one, divide by two, and round down, we get the number of heroes in that box.

> sub heroes($c) { [+] ($c.boxes »-» 1) »div» 2 »max» 0 }; Nil
> heroes(conf [2, 0])
0
> heroes(conf [3, 3, 0])
2

Heroes live to save hostages. In fact, any move which doesn't use a hero to save a hostage will just end up wasting a pebble. We can use this knowledge to define a better moves-from sub, restricting it to moves that save hostages:

> sub moves-from($c, $from) { (move($c, $from, $_) if $c.boxes[$_] == 0 for ^n($c)) }; Nil

The search moves faster with this condition. For example, the search tree from above gets trimmed to this:

[4 0 0]
    [2 1 0]
        [0 1 1]
    [2 0 1]
        [0 1 1]

Changing the literal 2 to 3 in the function moves (in recognition of the fact that the bottom pebble never figures in a viable move) cuts the tree down even further:

[4 0 0]
    [2 1 0]
    [2 0 1]

I noticed the pattern that any possible answer configuration I could come up with had the property that there was exactly one more hostage than there were heroes.

> sub one-more-hostage-than-heroes($c) { hostages($c) == heroes($c) + 1 }; Nil
> one-more-hostage-than-heroes(conf [2, 0])
True
> one-more-hostage-than-heroes(conf [3, 1, 0])
False

This makes intuitive sense: a configuration that is an answer needs to be not solvable (less than one hero per hostage), but it also needs to be just barely not solvable. That is, there has to be just one hostage too many.

Does this fully describe a solution, though? It turns out it doesn't, but in order to see it, let's bring in a testing tool.

Proving stuff with QuickCheck

We'll want to generate thousands of random configurations for this, so I defined the following two routines. The configuration space is infinite, and it was hard to know how to choose configurations randomly. In the end I favored an approach with small finite configurations with relatively few pebbles, hoping it would catch all relevant cases.

sub random-box { Bool.pick ?? 0 !! (1..5).pick }

sub random-conf {
    my $n = (0..5).pick;
    conf [random-box() xx $n];
}

Next up, a function that tests a certain property on a lot of random configurations. It's not a total guarantee of correctness, but once you've tested something against 1000 random inputs, you can have a fairly high confidence that no exception has slipped through. Think of it as a kind of probabilistic proof.

sub quickcheck(&prop, $N = 1000) {
    for ^$N {
        print "." if $_ %% 20;
        my $c = random-conf;
        return "Counterexample: $c.gist()" unless &prop($c);
    }
    return "All $N cases passed.";
}

First up, let's test the statement that if some configuration is a solution, then it has one more hostage than it has heroes.

Because these properties end up talking a lot in terms of if-then relationships, let's create an operator for logical implication.

sub infix:«⇒»($premise, $conclusion) { !$premise || $conclusion }

sub if-answer-then-one-more-hostage($c) {
    is-answer($c) ⇒ one-more-hostage-than-heroes($c);
}

> quickcheck &if-answer-then-one-more-hostage
..................................................All 1000 cases passed.

Ok, that turns out to be true. How about in the other direction?

sub if-one-more-hostage-then-answer($c) {
    one-more-hostage-than-heroes($c) ⇒ is-answer($c);
}

> quickcheck &if-one-more-hostage-then-answer
.Counterexample: [0 1]

This is why QuickCheck-based testing is great; it not only tells us that something fails, it also gives us a counterexample by which we can see clearly how and why it fails. In this case, that 1 in there is not enough to save the hostage. Nor is it enough if that box gets another pebble.

Clearly there is some factor at work here besides hostages and heroes.

We've accounted for that bottom pebble, the useless one that we can never do anything with. On top of it are zero or more pairs of pebbles; our heroes. But on top of that can be yet another pebble; let's define a lone pebble like that to be an everyday hero, because all it takes is a small push (one more pebble) to create a hero out of an everyday hero.

The bottom pebble + pairs of pebbles for heroes + everyday hero pebble = a positive even number of pebbles. So the easiest way to state "this box is either a hostage or an everyday hero" is to say "there's an even number of pebbles in this box".

Let's see if adding that condition is enough to predict answers.

sub all-hostages-or-everyday-heroes($c) { so $c.boxes.all %% 2 }
sub if-one-more-hostage-and-all-hostages-or-everyday-heroes-then-answer($c) {
    (one-more-hostage-than-heroes($c)
        && all-hostages-or-everyday-heroes($c))
        ⇒ is-answer($c)
}

> quickcheck &if-one-more-hostage-and-all-hostages-or-everyday-heroes-then-answer
..................................................All 1000 cases passed.

It is enough! Now that we know it's a sufficient condition, let's find out if it's also a necessary one.

sub one-more-hostage-and-all-hostages-or-everyday-heroes-means-answer($c) {
    (one-more-hostage-than-heroes($c)
        && all-hostages-or-everyday-heroes($c))
        == is-answer($c)
}

> quickcheck &one-more-hostage-and-all-hostages-or-everyday-heroes-means-answer
..................................................All 1000 cases passed.

Ooh, and it is! Lovely.

Notice how much of a simplification this brings about. The two conditions we just defined (one-more-hostage-than-heroes and all-hostages-or-everyday-heroes) just check surface properties of a configuration, whereas is-answer has to perform a possibly large tree search. But quickcheck tells us that the combination of the two conditions is completely equivalent to the whole tree search.

Awesome.

Just to bring that point home, let's drop all the cute terminology, and just write it in terms of the mathematical properties we need to check:

sub pebbles-are-twice-boxes-minus-two-and-all-boxes-even-means-answer($c) {
    ([+]($c.boxes) == 2 * n($c) - 2 && so($c.boxes.all %% 2))
        == is-answer($c)
}

> quickcheck &pebbles-are-twice-boxes-minus-two-and-all-boxes-even-means-answer
..................................................All 1000 cases passed.

And that's the answer.

(You can also read the solution here, problem 5.)

Enumerating all answers

We might consider ourselves having solved the problem completely, but it feels a bit weird to leave it at that. Can't we get a list of all the answers too?

I started writing a custom recursive solution, but ended up recognizing what I was doing from the output I was getting. (And from the fact that the number of answers of each size led me to this OEIS sequence.)

What we're looking for is really a kind of integer partition. That makes sense; we have a fixed number of pebbles, and we want to distribute them among the boxes in all possible ways.

As one does nowadays, I went out on Stack Overflow to look for a suitable algorithm to compute integer partitions. Found this elegant Python solution. This is my Perl 6 rendering of it:

sub partitions($n) {
    uniq :as(*.Str), gather {
        take [$n];
        for 1..^$n -> $x {
            take ([($x, .list).sort] for partitions($n - $x));
        }
    }
}

Of course, once we have the partitions, we need to massage them a little bit. To be exact, we reverse the partition (because I like reading them in descending order), double the numbers (to get only even numbers), and we pad with zeroes at the end.

sub double(@list) { @list »*» 2 }
sub pad(@list, $size) { [@list, 0 xx ($size - @list.elems)] }
sub all-answers($n) { (.reverse.&double.&pad($n) for partitions($n - 1)) }

Note by the way that these answers are "symmetry broken". For each solution, the order of the boxes is immaterial to the problem, so all permutations of boxes are also viable answers. So picking a canonical order and sticking with it makes the output a lot smaller without missing anything essential.

Finally, we print the answers. Sorting is not necessary, just esthetic.

sub array-cmp(@xs, @ys) { [||] @xs Z<=> @ys }

for 1..* -> $n {
    my @answers = all-answers($n).sort(&array-cmp);
    say "{@answers.elems} answers of size $n:";
    say "  ", .&conf for @answers;
}

This is how they look. These are just the first seven iterations; it goes on for a while.

1 answers of size 1:
  [0]
1 answers of size 2:
  [2 0]
2 answers of size 3:
  [2 2 0]
  [4 0 0]
3 answers of size 4:
  [2 2 2 0]
  [4 2 0 0]
  [6 0 0 0]
5 answers of size 5:
  [2 2 2 2 0]
  [4 2 2 0 0]
  [4 4 0 0 0]
  [6 2 0 0 0]
  [8 0 0 0 0]
7 answers of size 6:
  [2 2 2 2 2 0]
  [4 2 2 2 0 0]
  [4 4 2 0 0 0]
  [6 2 2 0 0 0]
  [6 4 0 0 0 0]
  [8 2 0 0 0 0]
  [10 0 0 0 0 0]

So, it has come to this

I put all the code from this blog post in a gist if anyone wants to play with it.

This problem is now officially flushed out of my system. I like how Perl 6 rose to the challenge of helping me solve it. I'm also positively surprised by the "feel" of doing QuickCheck testing. Gotta do more of that.

I worked under a self-imposed restriction that things written in the REPL ought to fit on one line. It made me reach for ways to chunk ideas into functions, which I think ended up bringing out the intent of each step a bit better.

Finally, although I knew it from before, junctions and hyperops and ranges and list comprehensions and functions and metaoperators and custom operators and lazy lists... they all conspire to make problem solving and exploratory programming like this a really pleasant experience.

Carl Masak | Strangely Consistent | 2014-06-16 23:49:10

Wow, this week I’ve even done some stuff! I’ven’t been active in the source code for a while, so it felt quite refreshing. Here are all (well, at least some of) the nice things that have turned up over the last 6 days:

  • jnthn has merged the “inline” branch of MoarVM that adds the ability to inline simple bytecode segments into their caller’s bytecode, thus getting rid of a nice chunk of invocation cost. Sadly, it currently bails out if it sees exception handlers or “return” handlers, which are extremely common in actual Perl 6 code. Thus, the improvements are mostly visible in NQP code.
  • I added a few more methods to the cairo binding and started on the GtkDrawingArea class for the gtk3 binding. There’s a whole lot of stuff involved before enough stuff is in place to make cairo-based animations work well.
  • I also implemented a Supply combinator called “zip-latest”. It will generate a tuple (or apply your custom sub) every time a new value comes in from any of the supplies, as opposed to the “zip” combinator that waits for all Supplies to have a new value available.
  • lizmat did a whole bunch of commits related to CompUnitRepo and friends. It’ll be exciting to see the whole potential of the infrastructure used, for example applications bundled with all their dependencies in a single executable file and other kinds of things.
  • brrt continued his GSoC work on the MoarVM Just In Time Compiler. The current piece of code that’s being used as an example is running the following subroutine in a loop:
    sub foo() {
        nqp::say("OH HAI");
        return 12 - 6;
    }

    The JIT compiler turns the whole function directly into runnable machine code, which at the very least eliminates the interpreter overhead.

  • Chirag Agrawal and Reini Urban have pushed more work in their optimization efforts for write barriers. Reini reports a 2-4% performance improvement in the NQP test suite after annotating all the 6model classes correctly. This work is part of the Parrot 6.5.0 release (planned for tomorrow). In addition, rakudo-parrot is going to pass all spectests again (when a pull request is applied). Cool stuff!
  • dwarring could be considered on a roll, as he’s continuously writing spectests for the Perl 6 Advent Calendars of past years :)
  • Kamil Kułaga added a utility program for Lacuna Expanse written in Perl 6 to the ecosystem.
  • Michal Jurosz added a simple templating library for Perl 6 to the ecosystem.

That’s it from me for this week’s post. I hope your week is productive and pleasant :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-06-16 17:05:07

Those of you who have followed #moarvm or github closely may already know, but this week I've finally checked in code that calculates 2 + 2 = 4 and returns that value to its caller. To be very specific, I can make a frame that does the following operations:


const_i64_16 r0, 2
const_i64_16 r1, 2
add_i r2, r1, r0
return_i r2

As a proof of concept, this is a breakthrough, and it shows that the strategy we've chosen can pay off. I couldn't have done it without the help of FROGGS, jnthn, nwc10, timotimo and others, but we're finally there. I hope. (I'll have to see about Windows x64 support.) The next thing to do is cleanup and extension. Some objectives for the following week are:
  • Cleanup. The JIT compiler still dumps stuff to stderr for my debugging purposes, but we shouldn't really have that. I've tried moving all that output to the spesh log, but I can hardly find the data in there, so I think I'll make a separate JIT log file instead. Similarly, it should be possible to specify the file for the JIT compiler's machine code dump - if any. And I should add padding to the dump, so that more than one block can be dumped.
  • Adding operations to compile. MoarVM supports no fewer than 638 opcodes, and I support only 4 so far. That is about 0.62% of all opcodes :-). Obviously, in those terms, I have a long way to go. jnthn suggested that the specialized sp_getarg opcodes are a good way to progress, and I agree - they'll allow us to pass actual arguments to a compiled routine.
  • Translate the spesh graph out of SSA form into the linear form that we use for the JIT 'graph' (which is really a labeled linked list so far).
  • Compile more basic blocks and add support for branching. This is probably the trickiest thing of the bunch.
  • Fix myself a proper windows-x64 virtual machine, and do the windows testing myself.
  • Bring the moar-jit branch up-to-date with moarvm master, so that testers don't have such a hard time.
As for longer-term goals, we've had some constructive contact with Mike Pall (of LuaJIT / DynASM fame), and he suggested ways to extend DynASM to support dynamic registers. As I tried to explain last week, this is important for 'good' instruction selection. On further reflection, it will probably do just fine to introduce expression trees - and the specialized compiler backend for them, which would need register selection - gradually, i.e. per supported instruction rather than all at once.

However, the following features are more important still:
  • Support for deoptimisations. Up until now (and for the foreseeable future) we keep the memory layout exactly the same.
  • JIT-to-interpreter calls. This is a bit tricky - MoarVM doesn't support nesting interpreters. What we'll have to do instead is return to the interpreter with a label that stores our continuation, and continue at that continuation when we return (see the sketch after this list).
  • At some point, JIT-to-JIT calls. Much the same problems apply - in theory, this doesn't have to differ from JIT-to-interpreter calls, although obviously we'd rather optimise the interpreter out of this loop.
  • Support for exceptions, obviously, which - I hope - won't be as tricky as it seems, as it ultimately depends on jumping in the bytecode at the right place.
  • Support for simple optimisations, such as merging various MoarVM opcodes into a single opcode if that is more suitable.
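
Here is the sketch promised in the JIT-to-interpreter item above; every name in it (Interp, JitFunc, interp_run_invocation) is an illustrative assumption of mine, not MoarVM API:

/* JIT-compiled code never nests an interpreter. Instead it returns
 * a resume label; the interpreter handles the invocation and then
 * re-enters the JIT code at that label. */
typedef struct Interp Interp;
typedef int (*JitFunc)(Interp *i, int resume_label);

void interp_run_invocation(Interp *i);     /* hypothetical helper */

void run_jitted_frame(Interp *i, JitFunc code) {
    int label = 0;                         /* 0 = enter at the top   */
    for (;;) {
        label = code(i, label);            /* run until done or call */
        if (label < 0)
            return;                        /* negative = frame done  */
        interp_run_invocation(i);          /* then resume at `label` */
    }
}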
So that is it for now. See you next week!

Bart Wiegmans | brrt to the future | 2014-06-14 01:07:04

In order to smooth the transition back to the “every monday” schedule, I decided to write this post on tuesday instead of monday. Here are some of the things that have happened since last wednesday:

  • Filip Sergot added HTTP::UserAgent (but only ::Simple so far) to the ecosystem.
  • brrt has been working on the MoarVM JIT based on DynASM, but it seems like there’s a problem regarding dynamic register usage in DynASM. Still, with only static registers, a JIT could remove the overhead introduced by decoding and dispatching opcodes in the interpreter, get better performance from the CPU cache, …
  • Chirag Agrawal and Reini Urban have been improving parrot's pmc2c and PMCs by removing nested method calls and unneeded write barriers. It doesn't break the rakudo build, but performance measurements are still to be done. PMC write barriers can now also be manually optimized. See this changelog patch for some more details.
  • jnthn has started work on letting MoarVM’s spesh inline code at specialize-time. Currently, NQP won’t start with it, but just looking at the log output shows a few nice opportunities already being considered for inlining.
  • dwarring put more tests into the test suite for the advent calendar posts.
  • donaldh worked on the socket tests for JVM and async socket tests in general.
  • XFix wrote a decoder and encoder for DSON, a very promising data serialization format. It’s derived from JSON::Tiny and re-uses a lot of its code; it’s a nice example for subclassing grammars: Compare the Actions.pm from JSON::Tiny and the Actions.pm from Acme::DSON.
  • the dyncall version included in some of our repositories was causing trouble on FreeBSD. That was worked on.
  • Ulti split off the statistics functions from his BioInfo module into a stats module. It currently has mostly average-related things.
  • FROGGS put all the necessary stuff in place to allow slangs to be written in Pure Perl 6 and is going to modify his v5 module to be written in Perl 6 rather than NQP in the future.
  • Larry Wall added two nice bits to STD.pm: Don’t throw errors about deprecated special variables inside signatures and warn about duplicated characters in character classes. For anybody looking to get into rakudo development, porting these patches to Rakudo would be a very nice low-hanging fruit.

Thank you for your patience. May your week be a pleasant one :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-06-10 19:48:27

Today is the day I've both created an implementation of the 'JIT graph' and destroyed it. (Or rather stashed it away in a safe branch, but you get the point.) The current HEAD of moar-jit has nothing that deserves a name like 'JIT graph'; it is merely a thin layer around MVMSpeshGraph. So I thought maybe I should explain why I did this, what the consequences are, and what I'll do next.

First of all, let me explain why we wanted a 'JIT graph' in the first place, and what I think it ought to be. MoarVM contains a bytecode specialization framework called spesh. My current project to write a JIT compiler can be seen as an extension of this framework, and the core data structure of spesh - namely, MVMSpeshGraph - is also the input to the JIT compiler. I've promised a thorough walkthrough of spesh and you'll get it, but not today; today I have another point to make. That point is that although the spesh graph applies some sophisticated transformations upon the source bytecode, it is in essence still MoarVM bytecode. It still refers to MoarVM instructions and MoarVM registers.

Now that is perfectly alright if you want to eventually emit MoarVM instructions, as it has done up until now. However, there is still quite a layer of abstraction between MoarVM and the physical processor that runs your instructions. For example, in MoarVM, acquiring the value of a lexical is as simple as a single getlex instruction. For the CPU, there are several levels of indirection involved to do the same, and quite possibly a loop. The goal of the 'JIT graph', then, was to bridge these levels of abstraction. In effect, it is to make the job of the (native) code generator much simpler.
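
To make that concrete, here is a hedged sketch of what a single getlex conceptually costs the CPU; the structure and names are illustrative assumptions of mine, not MoarVM's actual ones:

/* A lexical lookup walks `depth` outer frames (a loop, plus one
 * pointer dereference per hop) before the actual load. */
typedef struct Frame {
    struct Frame *outer;       /* lexically enclosing frame  */
    long         *lexicals;    /* this frame's lexical slots */
} Frame;

long getlex(Frame *f, int depth, int index) {
    while (depth--)
        f = f->outer;          /* several levels of indirection */
    return f->lexicals[index]; /* the final load                */
}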

I think the best way to explain this is with an example. Given the following MoarVM instruction:


add_i r0, r1, r2

I'd like to construct the following tree:

store --> address --> moar-register(r0)
   \--> value --> add --> load --> moar-register(r1)
                     \--> load --> moar-register(r2)

I think we can all criticize this structure for being verbose, and you'd be correct, but there is a point here. This structure is suitable for tree-matching and rewriting during code generation - in short, for generating good code. (Simpler algorithms that emit lousy code work too :-)). There are many nice things I could say about this structure. But it depends critically on my capability to select the registers on which operations take place. And as it turns out, on x86_64, I can't. Or on any other architecture than x86. Oh, and LuaJIT doesn't actually use DynASM to compile its JIT, what do you know.

Actually, I kind of could've guessed that from the LuaJIT source. But I didn't, and that is my own dumb fault.

So, what to do next? There are two - or three, or four - options, depending on your level of investment in the given tools. One such option is to forgo register selection altogether and use static register allocation, which is what I did next. If we do that, there is truly no point in having a complicated graph, because all the information is already contained in the MoarVM instructions themselves, and because you can't do anything sensible between instructions. After all, static register allocation means the registers are always the same. In essence, it means translating the interpreter into assembly. For most instructions, this approach is trivial - it could be done by a script. It is also rather unambitious and will never lead to much better performance than what the interpreter can do. Maybe 2x, but not 10x, which is what I think should be doable.
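
To illustrate what 'translating the interpreter into assembly' looks like, here is a hedged sketch, in DynASM-style x64, of a fixed template for the add_i r2, r1, r0 instruction from the earlier post; the convention that rbx holds the base of the MoarVM register file (8-byte slots) is my assumption, not the actual moar-jit code:

// Fixed template for: add_i r2, r1, r0
// (assumes rbx points at the MoarVM register file, 8 bytes per slot)
| mov rax, qword [rbx + 1*8]
| add rax, qword [rbx + 0*8]
| mov qword [rbx + 2*8], rax

Every opcode gets one such template, which is exactly why a script could generate them.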

The other option is to do register selection anyway, on top of DynASM, just because. I'm... not sure this is a great idea, but it isn't a terrible idea, either. In essence, it involves writing or generating giant nested switch structures that emit the right code to DynASM, like so, but everywhere, for every instruction in which you'd want this. I don't think that is particularly tractable by hand, but it would be for a preprocessor.

The third option is to fix DynASM to do dynamic register allocation on x86_64 and any other architecture you need it on. This is possible - we maintain a fork of DynASM - but it'd involve deep diving into the internals of DynASM. What is more, Mike Pall, who is vastly more capable than I am, decided not to do it, and I'm fairly sure he had his reasons. The fourth option is to look for another solution than what DynASM provides. For while it is certainly elegant and nice, it may not be what we ultimately want.


Bart Wiegmans | brrt to the future | 2014-06-09 15:03:33

Hi everybody! As it seems that a JIT compiler doesn't fall into place fully formed over a weekend, I've decided to set myself a few goals - along with smaller subgoals that I hope will help keep me on track. The immediate goal for the week is to compile a subroutine that adds two numbers and returns the result, like so:


sub foo() {
    return 3 + 4;
}

Which is literally as basic as you can get it. Nevertheless, quite a few parts have to be up and moving to get this to work. Hence the list. So without further ado, I present to you:

  • Modifying the Configure / Make files to run DynASM and link the resulting file.
I've actually already done this, and it was more complicated than it seems, and I'm still not completely happy about it.
  • Obtaining writable memory that can be marked executable
  • Marking said memory executable and non-writable (security, folks!); see the sketch after this list.
I plan to do this by hijacking MVM_platform_allocate_pages(), which nobody uses right now.
  • Determine, for a given code graph, whether we can JIT compile it. 
    • Called MVM_can_jit_graph(MVMSpeshGraph*)
  • Transforming a Spesh graph into a JIT graph
    • Note that I don't know yet what that JIT graph will look like.
    • I think it will hold values along with their sizes, though. I'm not sure the spesh graph does that. 
  • Directly construct our very simple code graph, by hand, using MAST.
  • JIT compiling the very simple code graph of our code.
  • UPDATE: attach a JIT code segment to a MVMStaticFrame
  • Calling and returning from that code.
This... will probably be a bit experimental - it's of no use to throw in a full-fledged register allocation and instruction selection algorithm to add two constant numbers. We can - in principle - also do without these, but it will lead to rather poor machine code. 
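
For the two memory-related items above, here is a minimal sketch of the usual POSIX recipe; it shows the general mmap/mprotect technique, not the actual MVM_platform_allocate_pages implementation:

/* Write generated machine code into writable pages, then flip them
 * to read+execute before calling into them (never writable and
 * executable at the same time). */
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

void *alloc_exec(const unsigned char *code, size_t len) {
    void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return NULL;
    memcpy(mem, code, len);                /* copy the emitted bytes */
    if (mprotect(mem, len, PROT_READ | PROT_EXEC) != 0) {
        munmap(mem, len);
        return NULL;
    }
    return mem;                            /* ready to cast and call */
}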

I've probably forgotten quite a few things in here. But this seems like a start. If there's something you think I missed, please comment :-)

Bart Wiegmans | brrt to the future | 2014-06-04 13:52:29

Hi there,

I’ve been surprisingly busy the last two days and didn’t get to write the weekly yet. Curiously, there wasn’t terribly much to write about anyway. So I’m taking the first few days of this week into account as well, so there’s more exciting stuff. Here goes:

  • lizmat and FROGGS have been working on implementing the new and improved spec S11 about modules and importation and versioning and stuff. That work had been happening in a branch until recently and it’s now on the master branch. I probably ought to write up what cool stuff you can use it for when the next release comes up unless somebody beats me to it.
  • donaldh has improved the IO related pieces of the core setting by removing special cases for different back-ends and partially re-implementing things as nqp ops there.
  • rakudo now honors a RAKUDO_MAX_THREADS environment variable to change the default number of tasks that should be run at the same time in the ThreadPoolScheduler. This can still be overruled by creating a ThreadPoolScheduler with a specific max_threads, though (see the sketch after this list).
  • jnthn has fixed a bunch of sundry problems: for loops with an explicit or implicit $_ that is marked “rw” used to clobber the outside $_, sub-signature binding (AKA destructuring assignment) used to turn itemized things into lists regardless of provided sigil, and a LAST phaser in a loop used to fire even if the loop didn’t run even once.
  • jnthn has taken up my preliminary work to make MoarVM’s bytecode specializer handle calls with named parameters as well.
  • lizmat has done a bunch of work on $*PERL/$?PERL, $*VM, $*USER, $*DISTRO, and many more.
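
As promised above, a quick usage sketch for RAKUDO_MAX_THREADS; the variable and the max_threads argument come from the item itself, while the exact code spelling is my assumption:

    # From the shell: cap the scheduler's default worker count.
    #   RAKUDO_MAX_THREADS=4 perl6 my-script.p6

    # From code, overruling the environment (assumed spelling):
    my $*SCHEDULER = ThreadPoolScheduler.new(max_threads => 4);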

Other things worth pointing out:

As usual, I hope you’ll have a pleasant rest-of-week :)


Weekly changes in and around Perl 6 | Weekly changes in and around Perl 6 | 2014-06-04 09:48:24

Hi there!

The first two weeks of Google Summer of Code have just ended; it's time for a summary!

I posted about HTTP::Headers already, so this post will be about HTTP::Message, HTTP::Cookies, and something I didn't plan to have as a standalone module: DateTime::Parse (FROGGS++).

HTTP::Message

This module wraps every HTTP message received from servers.

    use HTTP::Message;

    my $msg =
        "HTTP/1.1 200 OK\r\n"
      ~ "Server: Apache/2.2.3 (CentOS)\r\n"
      ~ "Last-Modified: Sat, 31 May 2014 16:39:02 GMT\r\n"
      ~ "ETag: \"16d3e2-20416-4fab4ccb03580\"\r\n"
      ~ "Vary: Accept-Encoding\r\n"
      ~ "Content-Type: text/plain; charset=UTF-8\r\n"
      ~ "Date: Mon, 02 Jun 2014 17:07:52 GMT\r\n"
      ~ "X-Varnish: 1992382947 1992382859\r\n"
      ~ "Age: 40\r\n"
      ~ "Via: 1.1 varnish\r\n"
      ~ "Connection: close\r\n"
      ~ "X-Cache: HIT\r\n"
      ~ "X-Cache-Hits: 2\r\n"
      ~ "\r\n"
      ~ "008000\r\n"
      ~ "# Last updated Sat May 31 16:39:01 2014 (UTC)\n"
      ~ "# \n"
      ~ "# Explanation of the syntax:\n";

    my $m = HTTP::Message.new.parse($msg);
    say ~$m;

Yes, we have just parsed an HTTP message; now we can edit it:

    $m.add-content("Some new content!!");
    say "content:" ~ $m.content;

    $m.header( Vary => 'Age' );
    say $m.header('Vary');

... and remove one header:

    $m.remove-header('Via');

... or delete the whole message:

    $m.clear;

We can write HTTP::Request and HTTP::Response now, using this HTTP::Message module.

The plan is to make it able to handle encoding stuff (like chunked transfer encoding).
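
For a taste of what that involves, here is a hedged sketch (my own illustration, not the module's code) of decoding a chunked body: each chunk is a hexadecimal length line followed by that many bytes, and a zero-length chunk ends the body.

    # Illustrative sketch only: decode a chunked transfer-encoded body.
    # Each chunk: <hex length>\r\n<data>\r\n; a zero length ends it.
    # (Trailer headers after the last chunk are ignored here.)
    sub decode-chunked(Str $body is copy) {
        my $result = '';
        loop {
            $body ~~ s/^ (<.xdigit>+) \r\n //;   # strip the length line
            my $len = :16(~$0);                  # hex string to number
            last if $len == 0;                   # final chunk reached
            $result ~= $body.substr(0, $len);    # keep the chunk data
            $body .= substr($len + 2);           # skip data plus CRLF
        }
        return $result;
    }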

HTTP::Cookies

Another accomplishment is the HTTP::Cookies module, which lets us store HTTP cookies.

Here is an example:

    use HTTP::Cookies;

    my $file = './cookies.dat';

    my $c = HTTP::Cookies.new( :$file, :autosave );

    $c.set-cookie(
        'Set-Cookie: name1=value1; Expires=DATE; Path=/; Domain=somedomain; secure'
    );

    say ~$c;

The 'autosave' option means that every change will be saved immediately.

We can find our cookies in $file too:

    $ cat cookies.dat 
     #LWP6-Cookies-0.1
     Set-Cookie: name1=value1; Expires=DATE; Path=/; Domain=somedomain; secure

... later, we can load this file:

    $c.load;

HTTP::Request and HTTP::Response will use this module for cookie handling, so we'll be able to e.g. log into a website etc.

DateTime::Parse

Another thing, which actually appeared unexpectedly, is the DateTime::Parse module. We can use it to parse e.g. HTTP dates (like Last-Modified: Sat, 31 May 2014 16:39:02 GMT). It supports the RFC1123 and RFC850 time formats for now.

It is built using a very powerful Perl 6 feature: Grammars and Actions.
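
To give a flavor of how such a parser looks, here is an illustrative toy (not the module's actual grammar) that matches an RFC1123 date:

    grammar RFC1123 {
        token TOP   { <wkday> ',' \s <day> \s <month> \s <year> \s <time> \s 'GMT' }
        token wkday { 'Mon' | 'Tue' | 'Wed' | 'Thu' | 'Fri' | 'Sat' | 'Sun' }
        token day   { \d ** 2 }
        token month { 'Jan' | 'Feb' | 'Mar' | 'Apr' | 'May' | 'Jun' |
                      'Jul' | 'Aug' | 'Sep' | 'Oct' | 'Nov' | 'Dec' }
        token year  { \d ** 4 }
        token time  { \d ** 2 ':' \d ** 2 ':' \d ** 2 }
    }

    say RFC1123.parse('Sat, 31 May 2014 16:39:02 GMT')<year>;  # 「2014」

An actions class can then turn the named captures into a DateTime object.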

We are able to compare dates like this:

    say Date.today < DateTime::Parse.new("Sat, 31 May 2014 16:39:02 GMT").Date;

As you can see, we're losing the time in this comparison; I hope that will be improved.

Plans

FROGGS, moritz and I decided to change the name of the LWP modules to HTTP, so from now on it's not LWP::UserAgent but HTTP::UserAgent. The reason was that we want to keep all the modules used for HTTP stuff under the same name: HTTP. :)

It is the third week of Google Summer of Code and here is the plan:

  • complete HTTP::Request and HTTP::Response modules
    • with cookies working
    • with encoding/decoding working
  • write HTTP::Simple
  • write lwp tools: lwp-request, lwp-dump and lwp-download (should we name them http-* as well?)

I really enjoyed the first two weeks of coding under the care of awesome mentors.

Are you thinking about participating next year? You should!

Filip Sergot | filip.sergot.pl | 2014-06-03 00:00:00

<flussence> as a minecraft player I figured out what t4 was asking pretty much instantly :)

This is me trying to emerge from the big strange writer's block that has inexplicably formed around the t4 blog post. Here goes.

The t4 task was my clear favorite this year. It has a certain William Gibson quality to it, with virtual rain falling inside a three-dimensional world where everything is made of cubes which mostly just hang there, suspended, in mid-air.

## Simulate rain in a world of cubes

Write a program that calculates the volume of rain water collected in the cube
world described below.

The cube world — given as input — consists of a finite set of cubes
on integer coordinates `(x, y, z)`. The positive `y` coordinate means "up".

An infinite amount of rain then falls from an infinite height. Both of these
infinities are taken to really mean "large enough as to make no difference".
As it lands on cubes, the water will follow predictable rules:

* Rain falls everywhere.

* Water falling will land on the first cube below it. It does not fall through
  cubes.

* Water will collect on levels where walls on all sides will keep it in.

* Water will produce vertical waterfalls where such walls are missing.

* Cubes are packed tightly enough that gaps between cubes sharing an edge will
  not let water through. However, the same gaps will readily let air through if
  water needs to displace air for some reason.

Waterfalls work in the simplest way imaginable: if water "escapes" from a      
structure of cubes, it will fall straight down along the first available
"chute" of cube-formed empty cells until it hits a cube. (Which it may not
necessarily do. A waterfall may go on to infinite depth.) As a waterfall hits a
cube, it behaves just like other kinds of water: it may spread, collect, and
form new waterfalls as needed.

People had different ideas how to solve this one:

  • Massive flood. Fill the whole universe with water, and then carefully drain it, taking note of what's left.

  • Multiple joining pools. Keep track of all the individual bodies of water. Raise the water level as long as that's still possible, and join together bodies of water that touch.

  • Waterfall, Frozen. Track all bodies of water, following waterfalls in the forwards direction. For each cell proven to contain steady-state water, turn that block into solid wall, and increase a counter by 1.

I had fun guessing what solutions people would come up with. I correctly guessed the first two, but not the last one. I guess it's a bit too mutable for my FP brain to come up with these days.

Anyway, the mistakes! Oh, the mistakes. Not just one or two contestants made them on this one; all of them did. Turns out simulating rain on cubes is hard!

Here follows a choice list of faulty assumptions made by the contestants, which make their programs return odd results.

Assuming that rain can reach where it can't

XXX
X.X
X.X
XXX

Let me explain the above picture. In order to test the four entrants against odd cases, I wrote a small program that builds a cube world from the above syntax. It only describes a cross-section, so walls in the depth direction are automatically added. In other words, the above depicts a sealed box with no way in.

It should contain no rainwater, of course. One of the programs returns that it's full of water.

Oh, and by the way, the script that produces coordinates from pictures like the above turned out quite cute and simple, so let me share it:

my %coords =
    ' ' => [         ],
    'X' => [-1, 0, +1],
    '.' => [-1,    +1],
    '~' => [-1,    +1],
;

for lines.kv -> $y, $line {
    for $line.comb.kv -> $x, $char {
        for %coords{$char}.list -> $z {
            say "($x, {-$y}, $z)";
        }
    }
}

Assuming that the water can rise higher than its lowest outlet

  XXX
  X.X
  X.X
X~X~X
X~X~X
X~~~X
XXXXX

It's for cases like this that I felt a need in the problem description to talk about gaps between cubes that "will readily let air through if water needs to displace air". In other words, if the above is a kind of barometer, then it's a completely useless one, because it leaks air, and the water finds an equilibrium based only on itself.

...which means that the correct answer above is 7. That's the number of waterfilled cubes when the water level is the same "inside" the barometer and at its mouth.

One of the programs got 9, assuming that the barometer fills up completely. Two programs got 0, assuming no water can even enter.

Speaking of which...

Assuming that some vessels are unable to contain water

    XXX
X~X X~X
X~XXX~X
X~~~~~X
XXXXXXX

Two programs had trouble with this one. I don't know if it's because of the banana shape or the cover over one of the ends. But they got 0 cells of rainwater collecting in it, when the correct answer is that it fills up all 9 internal cells.

Underestimating the size of a vessel

XXXX~XXXX
X~~~~~~~X
X~~~~~~~X
X~~X~X~~X
X~~X~X~~X
XXXXXXXXX

A small vessel sitting in a bigger vessel. A naive program might reach the brim of the small vessel, figure "oh, ok, we're done here", and then not fill up the bigger vessel with water.

This happened with one of the programs.

Concreteness and TDD

I've mentioned it in previous posts, but the way I pick problems for the contest is I find problems where I myself go "oh, that's easy, I'll just..." and then a while later, I go "...oh wait." Problems that look easy on the surface, but then turn out to have hidden depths. (A bit like these vessels holding water can have hidden depts, tunnels, nooks and crannies.) One of my favorite feelings when I design something is having the model "break" for a certain case. It's like the floor falling out from under me, and I have to re-orient myself inside the solution space to accomodate the new rules.

All the failures above emphasize the need for having actual test cases to run the program against. The base tests I send with the problems are (intentionally) inadequate for this purpose. The contestant is meant to think up their own tests, consider edge cases, special cases, and pathological cases.

To me, that's where unit testing shines. Development suddenly becomes a back-and-forth discussion between you and the programming substrate over something very tangible: concrete cases.

Only one champion still standing

Only one of the programs passes all of the above tests with flying colors. Well, I do want to stress that all four contestants made brave efforts. But for one reason or another, one of the four programs ended up especially correct.

Check out the reviews for details.

...no, wait

Hm. What about this case?

XXXX~XXXX
X~~~~~~~X
X~~XXX~~X
X~~~~~~~X
XXXXXXXXX

Should be able to hold 19 cells of water, right? Well, wouldn't you know. Our so-far unblemished program fails this one, with the cryptic error message Merging non-balanced water masses. (Two other programs get the correct 19, and the last one gets 0.)

So I take it back. None of the programs are correct. Pity. But my points about deep model thinking and representative test cases still stand. Correctness is hard!

Next up: distributing weights evenly in bags.

Carl Masak | Strangely Consistent | 2014-05-30 16:01:27

During the Polish Perl Workshop 2014 Carl Mäsak showed us how to model a Feline Hotel application.
But he forgot one thing - that cats own the Internet, and they want to browse and reserve rooms online!
I will pick up where he left off and show you how to publish an API and go live in the blink of an eye.


So let's create a modern Feline Hotel in Perl 6!

    class FelineHotel;

    has %!rooms =
        403 => {
            'type'      => 'Standard',
            'equipment' => [ 'bed', 'bowl' ],
            'price'     => 64,
            'available' => True,
        },
        406 => {
            'type'      => 'Purrific',
            'equipment' => [ 'bed', 'bowl', 'toys', 'jacuzzi' ],
            'price'     => 128,
            'available' => True
        };

    method browse_rooms ( ) {
        return %!rooms.grep( { .value{ 'available' } } ).hash;
    }

    method reserve_room ( Str $name!, Int $number! ) {
        self!check_room( $number );

        return not %!rooms{ $number }{ 'available' } = False;
    }

    method !check_room ( Int $number ) {
        die 'No such room'
            unless %!rooms{ $number }:exists;
        die 'Room not available'
            unless %!rooms{ $number }{ 'available' };
    }

The application lives in the FelineHotel.pm file and has a very simple interface: the browse_rooms method returns all available rooms, and the reserve_room method makes a reservation given a cat's name and a room number. A reservation calls the private method check_room and fails if the room does not exist or is not available.
But how do we write an API that allows online clients to connect and use those methods? Just create a server.pl file.

    use FelineHotel;
    use JSON::RPC::Server;

    JSON::RPC::Server.new( application => FelineHotel.new ).run;

Then run it in your interpreter.

    $ perl6 -I. server.pl

It should print Started HTTP server and hang waiting for connections on port 8080.
That's ALL, your Feline Hotel just went live.


To see the whole picture, let's create a client application.

	use JSON::RPC::Client;

	my $feline_hotel = JSON::RPC::Client.new(
            url => 'http://localhost:8080'
        );

	say 'Hotel has following rooms available:';
	say $feline_hotel.browse_rooms( );

	say 'Nyan cat makes reservation of room 403:';
	say $feline_hotel.reserve_room( 'Nyan', 403 );

	say 'Hotel has following rooms available:';
	say $feline_hotel.browse_rooms( );

You can run it on the same machine as the server, or on any remote machine if you have port 8080 forwarded - in that case change the url param in the third line.
And you will see that Nyan cat just reserved room online and this room is not available anymore.
Meooow!


Because our Feline Hotel is working, we have time for a little code dissection.
The whole functionality is wrapped in the FelineHotel class. The JSON::RPC::Server module takes an instance of this class and exposes its public methods for the outside world to use.
On the other side of the Internet cable, JSON::RPC::Client invokes those methods just as if they were declared in local code.

This technique is called Remote Procedure Call; it uses the JavaScript Object Notation format to exchange data in a way formalized by the JSON-RPC 2.0 protocol.

Go ahead and try to create your own services, or improve this Feline Hotel if you want. You will quickly realize that the JSON::RPC module not only hides the networking stuff that happens between client and server, but also tries to make your life easier in a "Do What I Mean" way. For example, you can overload methods on the server and catch exceptions in the client.

To demonstrate this, let's say that we want to refuse reservations for Grumpy cat, because he always gives bad reviews online. It is as simple as overloading the reserve_room method on the server:

	multi method reserve_room ( Str $name!, Int $number! ) {
	    self!check_room( $number );

	    return not %!rooms{ $number }{ 'available' } = False;
	}

	multi method reserve_room ( "Grumpy", Int $number! ) {
	    die 'No!';
	}

Now when Grumpy tries to make a reservation from the client:

	say 'Grumpy cat makes reservation of room 406:';
	try {
	    $feline_hotel.reserve_room( 'Grumpy', 406 );
	    CATCH { default { .say } }
	}

It will fail with the following error:

	Internal error (-32603): "No!"

You can also use named params on both the client and server side, advanced method signatures, batches of requests, notifications and much more. Moreover - all this stuff is not language dependent. You can connect to a Perl 6 server using a JSON-RPC 2.0 client written in PHP/Ruby/Java/etc., or use the Perl 6 client to call JSON-RPC 2.0 based APIs written in any language.
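
For instance - and this is only a sketch of the idea, not code from the module's documentation - if the server declared its method with named parameters, the client would pass them by name as well:

    # hypothetical variant of the server method, using named parameters
    method reserve_room ( Str :$name!, Int :$number! ) {
        self!check_room( $number );

        return not %!rooms{ $number }{ 'available' } = False;
    }

    # and the matching client call
    $feline_hotel.reserve_room( name => 'Nyan', number => 403 );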

Have fun and cheers from APIcon San Francisco 2014!


Pawel bbkr Pabian | Pawel bbkr Pabian | 2014-05-28 15:42:40

(Dateline: 2014-05-21)

On behalf of the Parrot team, I'm proud to announce Parrot 6.4.0, also known
as "Double-eyed Fig Parrot". Parrot (http://parrot.org/) is a virtual machine aimed
at running all dynamic languages.

Parrot 6.4.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/devel/6.4.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.4.0 News:
- Examples
+ Enhance shootout/regexdna.pir to test GC write barrier crashes
- Community
+ Our GSoC project has officially started. See https://github.com/ZYROz/parrot


The SHA256 message digests for the downloadable tarballs are:
025bfe953211d09af6a4d80b13b4e7fef2bfaa055963b76f1bf674440c0cdbba parrot-6.4.0.tar.gz
419ddbd4c82b08e4ab1670a67c2a222120d34090413e2d4ecef9cb35f9b0bef0 parrot-6.4.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is 17 Jun 2014.

Enjoy!

Perl 6 Announce | perl.perl6.announce | 2014-05-27 20:07:42

Hey people!

I am participating in Google Summer of Code this year; my project is to add TLS/SSL support to Perl 6, along with all the HTTP::* modules and LWP::UserAgent!

This will be a very good summer! Well, we'll be able to write a lot of new stuff after this project ends.

This is my first post about GSoC and it is about the HTTP::Headers module.

The main goal of this module is to provide functionality for handling HTTP headers. It's simple - that's my starting point.

While writing it I tried to keep it similar to the Perl 5 module. Headers are represented by a hash - every key-value pair is called a field, and the key is the name of a single header (names are case insensitive).

Example usage:

    use HTTP::Headers;
    my $h = HTTP::Headers.new(Accept => 'text/plain');

    my $a = $h.header('Accept');             # get
    $h.remove-header('Accept');              # delete
    $h.header(Content-Type => 'text/plain'); # set

    say $h.Str("\r\n");                # print headers as a string
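
Since names are case insensitive, a lookup should find a field no matter how you spell it - a tiny sketch, assuming the lookup behaves as the description above suggests:

    # assuming case-insensitive lookup, as described above
    say $h.header('CONTENT-TYPE');  # finds the 'Content-Type' field set earlier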

We are able to store multiple values in such fields:

    my $h = HTTP::Headers.new(Accept => <text/plain text/html>);

We can also push new values to existing field:

    $h.push-header(Accept => <image/jpeg image/png>);

Why do we need this?

HTTP::Message uses this to store HTTP headers, thus HTTP::{Request, Response} use it too (because they inherit from HTTP::Message).

What do you think about it?

And, as always, feel free to contribute!

Filip Sergot | filip.sergot.pl | 2014-05-20 00:00:00

If you read my blog, you'll likely know what MoarVM is and what it does. For readers who do not, MoarVM is a virtual machine that is designed to execute perl6 efficiently. Like a real computer, a virtual machine provides the following:

  • A 'processor', that is to say, something that reads a file and executes a program. This simulation is complete with registers and an instruction set.
  • An infinite amount of memory, managed by a garbage collection scheme.
  • IO ports, including file and network access.
  • Concurrency (the simulation of an unlimited number of processors via threads)
In this post I'll focus on the 'processor' aspect of MoarVM. MoarVM is a 'register virtual machine'. This means simply that all instructions operate on a limited set of storage locations in which all variables reside. These storage locations are called registers. Every instruction in the bytecode stream contains the addresses of the memory locations (registers) on which it operates. For example, the MoarVM instruction for adding two integers is called add_i, and it takes three 'operands': two for the source registers to be added together and a third for the destination register that stores the result. Many instructions are like that.

A register VM is often contrasted with a stack VM. The Java Virtual Machine is a well-known stack VM, as is the .NET CLR. In a stack VM values are held on an ever growing and shrinking stack. Instructions typically operate only on the top of the stack and do not contain any references to memory addresses. A typical stack VM would add two numbers by popping two off the stack and pushing the result.
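
To make the contrast concrete, here is a hand-waved sketch - illustrative pseudo-bytecode, not actual MoarVM or JVM instructions - of computing c = a + b on both kinds of machine:

    # register machine: a single instruction whose operands name registers
    add_i r2, r0, r1     # r2 = r0 + r1

    # stack machine: operands are implicit on top of the stack
    push a               # stack: a
    push b               # stack: a b
    add                  # stack: a+b
    pop  c               # store the sum into c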

Why was the choice for a register VM made? I'm not certain, but I think it likely that it was chosen because register machines are frequently faster in execution. In brief, the trade-off is between instruction size on one hand and the total number of instructions needed to execute a given program on the other. Because stack VM instructions do not contain any addresses (their operands are implicitly on the stack), they are smaller and the VM has to spend less time decoding them. However, values frequently have to be copied to the top of the stack in order for the stack machine to operate on them. In contrast, a register machine can just summon the right registers whenever they are required and only rarely has to copy a value. In most VMs, the time spent executing an instruction is much larger than the time spent decoding it, so register VMs are often faster.

From the point of view of somebody writing a (JIT) compiler (like myself), both architectures are abstractions, and somewhat silly too. All actual silicon processor architectures have only a limited number of registers, yet most 'register' VMs - including MoarVM - happily dole out a new set of registers for every routine. In some cases, such as the Dalvik VM, these registers are explicitly stack-allocated, too! The 'register' abstraction in MoarVM does not translate into the registers of a real machine in any way.

Nonetheless, even for a compiler writer there is a definitive advantage to the register VM architecture. To the compiler, MoarVM's instructions are the input that is to be transformed into native instructions. The register VM's instructions are in this sense very similar to something called Three Address Code. (Actually, some MoarVM instructions take more than three operands, but I'll get to that in a later post.) A very convenient property of TAC and MoarVM instructions alike is that every variable already has its own memory location. In contrast, in a stack VM the same variable may have many copies on the stack. This is inconvenient for efficient code generation for two reasons.

First of all, naively copying values as they would be in the stack VM will lead to inefficient code. It may not be obvious which copies are necessary and which are redundant. Nor is it immediately obvious how much run-time memory a compiled routine would use. To compile stack VM code efficiently, a compiler might do best to translate it into Three Address Code first.

But the second reason is perhaps more profound. Modern JIT compilers use a technique called type feedback compilation. Briefly, the idea is that a compiler that is integrated into the runtime of the system can exploit information on how the program is actually used to compile more efficient code than would be possible on the basis of the program source code alone. A simple example in javascript would be the following routine:

function foo(a) {
    var r = 0;
    for (var i = 1; i < a.length; i++) {
        r += (a[i] * a[i-1]);
    }
    return r;
}

foo([1,2,3,4,5,6]);

If all calls to foo happen to have a single argument consisting of an array of integers, the semantics of this routine become much simpler than they are otherwise. (For example, in javascript, the addition of a number and a string produces a well-defined result, so it is totally valid to call foo with an array of strings). A type-feedback compiler might notice a large number of calls to foo, all with integer arrays as their sole argument, assume this will always be so, and compile a much faster routine. In order to correctly handle arrays of strings too, the compiler inserts a 'guard clause' that checks if a is really an array of integers. If not, the routine must be 'de-optimised'. Note that spesh, which is the optimisation framework for MoarVM, also works this way.
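
Conceptually, the specialized routine then looks something like this - deliberately informal pseudocode, where is_array_of_int and deoptimize are made-up names:

function foo_int_array(a) {
    if (!is_array_of_int(a)) // guard clause: check the assumption
        deoptimize();        // resume in the slow, general version
    // fast path: plain integer arithmetic, no string-handling checks
    var r = 0;
    for (var i = 1; i < a.length; i++) {
        r += (a[i] * a[i-1]);
    }
    return r;
}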

The goal of de-optimisation is to resume the execution of the interpreted (slow) routine where the assumptions of the compiled routine have failed. A typical place in our 'foo' function would be on entry or on the addition to r. The idea is that the values that are calculated in the optimised routine are copied to the locations of the values of the interpreted routine. In a register machine, this is conceptually simple because all variables already have a fixed location. However, the layout of the stack in a stack VM is dynamic and changes with the execution of the routine, and the mapping between compiled and interpreted values may not be very simple at all. It is certainly doable - after all, the JVM famously has an efficient optimising JIT compiler - but not simple.

And in my opinion, simplicity wins.

Bart Wiegmans | brrt to the future | 2014-05-18 11:34:14

It’s been a while since I published RetroRacer, but a lot of new things have happened in Steroids since! So many that I backported the old games to the new engine; I’ll be testing them each time to see if I’m introducing any breaking changes. But! Back to Space Invaders.


(brave ship fighting off alien hordes)

 

The game is now available at https://github.com/tadzik/steroids. Below I’ll outline some of the new features in the engine itself.

Animations

It is now possible to load animations from spritesheets (example here), and tell Steroids to animate them over time.

self.load_spritesheet('invader', 'assets/invader.png', 72, 32, 7);

my $invader = self.add_sprite('invader', $x, $y);

self.add_animation($invader, Any, 200, True);

In order: load a spritesheet of seven 72×32 images, put it on screen and animate all its frames (Any), changing frames every 200 milliseconds, and play it in a loop (True). The ships will rotate and look nice :)

Gamepad support

method update($dt) {
    my $pad = self.gamepads[0];
    my $analog = $pad.analog_percentage($pad.analog_left_x);

    if self.is_pressed("Left") or $pad.dpad_position("Left") {
        $!player.x -= 15;
    } elsif self.is_pressed("Right") or $pad.dpad_position("Right") {
        $!player.x += 15;
    } elsif $analog.abs > 0.1 {
        $!player.x += Int(15 * $analog);
    }

    …

}

    

The new Steroids features gamepad support! At this point the only supported one is the Xbox controller (I accidentally used the old SDL joystick API instead of the new, shiny gamecontroller API), so it’s all a little bit experimental. But, as you can see, it works pretty well and is quite useful indeed!

Game states

class Main is Steroids::State {

    …

}

 

class Menu is Steroids::State {

    …

}

 

given Steroids::Game.new {
    .add_state('menu', { Menu.new });
    .add_state('main', { Main.new });
    .change_state('menu');
    .start;
}

What’s going on here? We have two separate game states (one for the menu and one for the actual game), and we can switch between them at any point using the change_state() method. For example, somewhere in Menu’s code:

method keypressed($k) {
    if $k eq 'S' {
        self.reset_state('main');
        self.change_state('main');
    }

    …

}

The states themselves are passed in as code references for the sake of the reset_state() method shown above. You can think of them as factories. The reset above is necessary, so each time you start a new game, it actually starts anew instead of continuing the old one (which is probably either lost or won by that time).

I probably forgot about something, so if anything is unclear just write it in the comment section. Go try out Space Invaders, and don’t forget about the soundtrack!

I’ll be talking about Steroids next weekend at this year’s Polish Perl Workshop; make sure to stop by to find out about the latest developments and future plans.


Tadeusz Sosnierz (tadzik) | Whatever but Cool Perl | 2014-05-11 18:26:00

As part of my ‘community bonding’ period, I’ve taken it upon me to write a small series of blog posts explaining the various parts I’ll be using to add a JIT compiler to MoarVM. Today I’d like to focus on the DynASM project that originates from the awesome LuaJIT project.

DynASM is probably best described as a run-time assembler in two parts. One part is written in lua and acts as a source preprocessor. It takes a C source file in which special directives are placed that take the form of assembly-language statements. Here is a fully worked-out example. These are then transformed into run-time calls that construct the desired bytecode. The generated bytecode can be called like you would a regular function pointer.
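
To give a flavour of what that looks like - a made-up fragment, not code from MoarVM - the C source contains lines prefixed with a | character, which the preprocessor picks up:

    /* illustrative sketch: lines beginning with '|' are assembler
       directives that the lua preprocessor rewrites into C calls,
       which emit the encoded machine code at run time */
    | mov rax, rdi
    | add rax, rsi
    | ret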

DynASM has no run-time dependencies. But to run the preprocessor you will need lua as well as the Lua BitOp module (also from the luajit project). The run-time part is contained within the headers. DynASM is licensed under the MIT license and supports many different architectures, including x86, x64, ppc, and arm. DynASM also integrates neatly into a Makefile-based build.

In many respects DynASM is an ideal tool for this particular job. However, it also has a few drawbacks. The most important of these is the lack of documentation. With the exception of a few scattered blog posts, there is barely any documentation at all. For many of the simple operations, this is sufficient. For the more complex things, such as dynamic register selection, or dynamic labels, it seems there is no other option than to ask directly. (FWIW, the 'dynamic registers' question was asked and answered only two days ago on the luajit mailing list). However, I think the benefits of using DynASM outweigh these issues.

For my next blog, I'll be looking at the MoarVM machine model and bytecode set, especially in relation to x64. Hope to see you then.

Bart Wiegmans | brrt to the future | 2014-05-11 04:58:47

A useful, usable, “early adopter” distribution of Perl 6

On behalf of the Rakudo and Perl 6 development teams, I’m happy to announce the April 2014 release of “Rakudo Star”, a useful and usable distribution of Perl 6. The tarball for the April 2014 release is available from http://rakudo.org/downloads/star/. A Windows .MSI version of Rakudo star will usually appear in the downloads area shortly after the tarball release.

This is the first Rakudo Star release with support for the MoarVM backend (all module tests pass on supported platforms) along with experimental support for the JVM backend (some module tests fail).

In the Perl 6 world, we make a distinction between the language (“Perl 6″) and specific implementations of the language such as “Rakudo Perl”. This Star release includes release 2014.04 of the Rakudo Perl 6 compiler, version 6.1.0 of the Parrot Virtual Machine, version 2014.04 of MoarVM, plus various modules, documentation, and other resources collected from the Perl 6 community.

Some of the new features added to this release include:

  • experimental support for the JVM and MoarVM backends
  • NativeCall passes all its tests on all backends
  • S17 (concurrency) now in MoarVM (except timing related features)
  • winner { more @channels { … } } now works
  • implemented univals(), .unival and .univals (on MoarVM)
  • added .minpairs/.maxpairs on (Set|Bag|Mix)Hash
  • Naive implementation of “is cached” trait on Routines

There are some key features of Perl 6 that Rakudo Star does not yet handle appropriately, although they will appear in upcoming releases. Some of the not-quite-there features include:

  • advanced macros
  • threads and concurrency (in work for the JVM and MoarVM backend)
  • Unicode strings at levels other than codepoints
  • interactive readline that understands Unicode
  • non-blocking I/O
  • much of Synopsis 9 and 11

There is an online resource at http://perl6.org/compilers/features that lists the known implemented and missing features of Rakudo and other Perl 6 implementations.

In many places we’ve tried to make Rakudo smart enough to inform the programmer that a given feature isn’t implemented, but there are many that we’ve missed. Bug reports about missing and broken features are welcomed at rakudobug@perl.org.

See http://perl6.org/ for links to much more information about Perl 6, including documentation, example code, tutorials, reference materials, specification documents, and other supporting resources. A draft of a Perl 6 book is available as docs/UsingPerl6-draft.pdf in the release tarball.

The development team thanks all of the contributors and sponsors for making Rakudo Star possible. If you would like to contribute, see http://rakudo.org/how-to-help, ask on the perl6-compiler@perl.org mailing list, or join us on IRC #perl6 on freenode.

rakudo.org | rakudo.org | 2014-05-05 17:18:19

(My talk at TEDxTaipei at 2014-04-27, before a panel with Linda Liukas, Matz and Charles Nutter. Slides in Chinese. 逐字稿中文版.)


Thanks, Linda, for sharing your fascinating story.

As my talk is about "Programming Languages and RailsGirls.tw", I'd like to start with a few stories of programming languages.

As we know, Rails is built on the Ruby language. Matz created Ruby by blending his five favorite languages together: Ada, Eiffel, Lisp, Perl, and Smalltalk.

I cannot cover all of them in a 20-minute talk, so let us start with Ada. Ada comes first in this list not only because its name starts with an "A", but also because it was named after Ada Lovelace, the world's first computer programmer.

In 1842, Ada wrote this program for the Analytical Engine, the first general-purpose computer ever designed but not constructed until a century later. Ada was also the first to realize that computers are not limited to work with numbers; she envisioned that people would compose music and create art on a computer.

Ada's mother was Annabella, a gifted scholar of mathematics. Ada's father, the great Romantic poet Byron, nicknamed his wife the "princess of parallelograms" because of her strict morality with a mathematical rigor.

And indeed, the art of computer programming is a blend of mathematics and poetry. Like a mathematical formula, good programs are rigorous and correct. Programmers, however, work like poets — we are creative with our languages, we convey a sense of purpose in a concise way, and we inspire each other to carry on our work.

As Professor Dijkstra put it: "Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer."

Both mathematicians and poets require a coherent vision to guide their work. The same principle applies to professional programming: Without a coherent vision and design integrity, sloppy programs quickly become unmaintainable, such that any attempts to fix a bug will introduce more bugs.

However, professional programming is not the only kind of programming, or even the most popular one. For nearly twenty years, the most well-known language on the web has been JavaScript, a "scripting language" that's easy to start with, but that also makes it very easy to write sloppy programs with a lot of bugs.

The distinction between scripting and programming languages dates back to the 1970s, with the introduction of the C language, a portable language that runs on almost any computer. Computer scientists in Bell Labs wrote hundreds of programs in C that worked together as a complex operating system, and they called it Unix.

Users of the Unix system were not expected to program in C. Instead they wrote "shell scripts" that were simple to write — mostly just a list of commands — but very difficult to maintain once they got complex.

Throughout the 1980s, the worldview was that there were programs written in complex and powerful languages like Objective-C and C++; and there were scripts written in simple but limited languages like sed and AWK.

The picture here is a linear spectrum with nothing in-between. If a script became too complex to maintain, people would just re-write it in a "real" programming language like C++.

In 1987, Larry Wall said, "We can open up this spectrum and turn it into a space with two dimensions." He saw C's strength as "Manipulexity", the ability to manipulate complexity, while shell scripts excel at "Whipuptitude", the ability to whip things up quickly.

Perl was hatched in this newfound space, as a language that could do a little bit of both, and one that evolves by redefining its own vocabulary. Over time, Perl evolved to be better at Whipuptitude than any shell scripts, and as good as C++ and Java at Manipulexity for all but the most complex programs.

With Perl, one could start with a sloppy script and, through "refactoring" techniques, gradually make it more rigorous and correct over time, without having to switch to a different language.

In the 1990s, a new generation of Perl-influenced languages appeared, such as Python, PHP, and Ruby. Each of them improved upon Perl in their own domains; I consider Ruby the most flexible of the three.

In 2005, the Rails project combined Ruby on the server side and JavaScript on the client side into a full-stack web framework. For many people working with C++ or Java, Rails showed them for the first time that "scripting" languages can build web programs that are more complex, and of larger scale, than contemporary "programming" languages could.

Rails succeeded in part because of its use of meta-programming, which provided a way to program the Ruby language itself into domain-specific languages such as ActiveRecord.

Since that time, popular frameworks such as jQuery and AngularJS have taken the same approach to JavaScript, allowing programmers to express our vision and design integrity with a re-programmed language that's much more rigorous and safe.

In the 2010s, Rails adopted CoffeeScript, a Ruby-like language that compiles into "the good parts" of JavaScript, to complement its use of the jQuery framework. This is an extension of the meta-programming idea — changing a language by keeping the best parts of it.

People in the Perl community took CoffeeScript to create the Coco language, and people in the Haskell community took Coco to create LiveScript. Nowadays, most of my programming is done in LiveScript, which allows me to express the same vision in a way that looks like Ruby, or looks like Perl, or looks like Haskell, whichever way that's most appropriate for the poem, er, program.

So those are my stories about Rails and programming languages. For the next half of my talk, I'd like to talk about the "Girls" part in Rails Girls.

In the first half of the 20th century, people working for women's rights have achieved a lot of legal victories, bringing equality in rights of voting, of education, of individual economy, of marriage and divorce to many people in the world.

However, this equality in law does not readily translate to equality in practice. As Simone de Beauvoir observed in 1949, many societies make women feel inferior not by law, but through the act of "Othering" in languages and in actions. Men are presumed as the default subject, and women are constantly reminded that they are the collective "Other" by the way they are treated, as a group different from the default.

In the 1970s, social workers and thinkers applied Simone's thoughts and observed various socially-constructed expectations known as gender roles. For example, a particular society may confine women into one of two primary roles: either as a Girl — an adorable object of desire, harmless and of inferior status; or as a Mother — a caretaker, provider of emotional support, and a reproductive agent.

What's missing in this picture is, of course, the various destinies that each of us wish upon ourselves. We encounter social pressure whenever we happen to contradict one of the expected roles.

We can fix this problem by adopting the vision: That Biology should not determine Destiny. In practical terms, it is helpful to re-introduce the concepts of "scripts" and "programs", this time from the field of social studies.

Larry Wall said this in his 2007 talk on scripting languages: "Suppose you went back to Ada Lovelace and asked her the difference between a script and a program. She'd probably look at you funny, then say something like: 'Well, a script is what you give the actors, but a program is what you give the audience.' That Ada was one sharp lady..."

Here we see social "scripts" are actions expected of people to act according to their roles. In contrast, a "program" informs participants what to expect from the social "norm", but does not dictate people's behaviors the way scripts do.

As a concrete example, when I began my IT career as the webmaster of a small publishing house "The Informationist" in 1994, I worked both online via a BBS and in the office. Many of our staff were openly LGBTQ and LGBTQ-supporting; it was a safe space for me to explore my gender expressions.

The press turned into a software company named "Inforian" in 1995, when I became its CTO, and started participating in the global Free Software community. While Taiwan's software sector at that time was generally gender-balanced, it shocked me to find that male-dominant scripts were prevalent in online Free Software communities.

After a while, I learned that many women on forums and chatrooms used male-sounding nicknames, not because it was their preferred gender expression, but as a protection against harassment. This was obviously a problem.

In 1998, the Open Source movement started and I helped run a few startups in the Silicon Valley, China, and Taiwan. As I started attending conferences and giving talks, I couldn't help but notice the lack of variety in gender expressions and in ethnic distribution.

For example, I heard the question "are you here with your boyfriend?" asked many times in these conferences, but not once "are you here with your girlfriend?" or "are you here with your partner?" — it was clearly a social script to make the recipient feel identified as an "other" — an outsider instead of a participant in the space.

After I returned to Taiwan to work on local open culture communities, I started consciously using the feminine pronoun in all my Chinese online writings, in an attempt to turn around the language's "othering" aspect.

When we started organizing our own conferences in 2003, I also took efforts to invite only the most socially compassionate speakers from abroad, who helped establish a more relaxed atmosphere where people can enjoy a safe space.

However, as Open Source gained commercial popularity, sexualized practices of IT industries' trade shows started to affect our conferences as well. One of these practices is promotional models, hired to drive interests to a vendor's booth; another is offensive imagery in conference contents, including from prominent speakers in both Free Software and Open Source communities.

In 2009, Skud, a long-time fellow hacker in the Perl community, started to speak widely at conferences on this subject. She created "Geek Feminism", a wiki-and-blog platform to list the issues and work together to improve them.

After a year's work, participants in the wiki created a "Code of Conduct" template, a social "program" that sets the expected norms. Valerie Aurora and Mary Gardiner, two Geek Feminism contributors from the Linux community, co-founded the Ada Initiative in 2011, so they can work full-time to support women in open technology and culture.

With help from many contributors, the Ada Initiative worked with over 100 conference organizers to adopt the code of conduct program. I'm very glad to see the upcoming "Rails Girls Summer of Code" event among the list of adopters.

There are three main elements of such a program:

  • Specific descriptions of common but unacceptable behavior (sexist jokes, etc.)
  • Reporting instructions with contact information
  • Information about how such policies are enforced

Together, they ensure a space where people can be aware of their own social scripts and their effects on each other and refactor them into a more sustainable community with openness and variety as a coherent vision.

There are many more activities from the Ada Initiative, and we have a list of resources and communities on the Geek Feminism wiki, which I'd like to invite you to visit.

To me, the most enlightening bit is perhaps not in the code itself, but in its programming — the fine-tuning of a conduct that fits best with the local culture.

When we create a safe space for a community's participants, to observe and decide our own social scripts, we can collectively program a social norm that is both rigorous and creative — just like the best formulas, poems, and programs.

In conclusion, I'd like to share two poetic fragments of mine with you:

    I would like to know you
        not by your types,
            classes or roles —
    — but by your values.

...and:

    Saying "Life is what we make it to be",
        is like saying "Language is what we make it to be" —
            True, but not at once;
                — just one bit at a time.

Thank you.

Audrey Tang | Pugs | 2014-04-28 10:51:37

Hi everybody, welcome to the first post. Here on this blog I will write about developing a just-in-time compiler for MoarVM. And perhaps many other things, but the JIT compiler comes first.

What is a JIT compiler? It is the not-so-magical component of an interpreter or virtual machine that takes a piece of interpreted code and makes machine code out of it. There are lots of ways to do that and I'll get to more detail in further posts. For now, I'd like to stress that I'll be working together with the awesome Jonathan Worthington and Timo Paulssen. And I'm really excited for this summer!

Bart Wiegmans | brrt to the future | 2014-04-21 13:34:32

(I’m really sorry for the name; I couldn’t think of anything better :))


 

This game, apart from (obviously) being a showcase for a new Steroids iteration, is all about switching lanes on a high traffic road in a fast car. Yay!

It’s really no rocket science compared to ThroughTheWindow from the last post – even the code looks similar. One obvious improvement (besides finally using proper PNGs instead of silly BMPs – timotimo++!) is the built-in collision detection:

my $s = self.add_sprite('othercar', $_, 0);

# …

$s.when({ $_.collides_with($!player) }, {

    # …

});

No more cheating with collisions like I did with ThroughTheWindow. The existing solution uses the entire image sprite as a hitbox; I’m hoping to make it customizable one day (it’s a simple thing really, code-wise).

All in all, the game isn’t all that much more sophisticated than the last one; I was really just looking for a good excuse to write a new game (and add some new stuff to Steroids), and I sort of came up with a nice theme to follow: ThroughTheWindow used just one key (spacebar), so the next step was to use two – thus RetroRacer uses the left and right arrow keys. What will the next game use? 3 keys? 4 keys? Is it an arithmetic or a geometric series? Oh my, I can’t wait to find out myself.

Now go and grab it at https://github.com/tadzik/RetroRacer, and don’t forget about the soundtrack!

 


Tadeusz Sosnierz (tadzik) | Whatever but Cool Perl | 2014-04-20 22:23:11

On behalf of the Parrot team, I'm proud to announce the supported
release Parrot 6.3.0,
also known as "Black-cheeked Lovebird". Parrot (http://parrot.org/)
is a virtual machine
aimed at running all dynamic languages.

Parrot 6.3.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/supported/6.3.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.3.0 News:
- Tests
+ Fixed tests for cygwin and cygwin64
+ Added 2 new examples/benchmarks/ files and benchmarks/run.sh
+ Fixed socket tests without IPv6 support at all [GH #1068]
- Community
+ New Benchmark results at https://github.com/parrot/parrot-bench
for all releases from 1.8.0 - 6.2.0


The SHA256 message digests for the downloadable tarballs are:
42aa409fa82d827019ebd218e8f9501b50e04ee81f3ccf705e03f59611317a1b
parrot-6.3.0.tar.gz
8d64df21751770741dac263e621275f04ce7493db6f519e3f4886a085161a80d
parrot-6.3.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is 20 May 2014.

Enjoy!
--
Reini Urban
http://cpanel.net/ http://www.perl-compiler.org/

Perl 6 Announce | perl.perl6.announce | 2014-04-19 06:26:59

In the Perl world I’m mostly known as a guy who hacks on Perl 6 stuff. Less known is that outside of the Perl world, I spend a lot of my time with the .Net platform. C#, despite a rather uninspiring initial couple of releases, has escaped Java-think and grown into a real multi-paradigm language. It’s not Perl, but it’s certainly not unpleasant, and can even be a good bit of fun to work with nowadays. My work with it right now typically involves teaching, along with various mentoring and trouble-shooting tasks.

The Windows world has always been rather into threads – at least as long as I’ve been around. .Net is also, as a result. Want to do some computation in your GUI app? Well, better farm it off to a thread, so the main thread can keep the UI responsive to the user’s needs. Want to do network I/O? Well, that could probably use some asynchronous programming – and the completion handler will be run on some thread or other. Then the results will probably want marshaling somehow. (That used to hurt; now things are better.) Building a web application? Better learn to love threads. You don’t actually get any choice in the matter: having multiple request-processing threads is the unspoken, unquestioned, default in a web application on .Net.

Of course, just because threads are almost ubiquitous doesn’t mean the average developer – or even the above-average developer – gets things right. A bunch of my recent trouble-shooting gigs have boiled down to dealing with a lack of understanding of multi-threaded programming. “So, we embed this 80s library in our web application, but things tend to crash under load.” “How do you deal with the fact that 80s library likely isn’t threadsafe?” “It…what?” “Oh, my…”

So anyway, back to Perl 6. Last year, we managed to get Rakudo on the JVM. And, given we now ran on a VM where folks deploy heavily threaded software every day, and with no particular progress to be seen on concurrency in Perl 6 for years, I did what I usually seem to end up doing: got fed up with the way things were and figured I should try to make them better. Having spent a bunch of years working with and teaching about parallel, asynchronous, and concurrent programming outside of the Perl world, it was time for worlds to collide.

And oh hell, collide they do. Let’s go word counting:

my %word_counts;
for @files -> $filename {
    for slurp($filename).words {
         %word_counts{$_}++;
    }
}

OK, so that’s the sequential implementation. But how about one that processes the files in parallel? Well, it seems there are a bunch of files, and that seems like a natural way to parallelize the work. So, here goes:

my %word_counts;
await do for @files -> $filename {
    start {
        for slurp($filename).words {
            %word_counts{$_}++;
        }
    }
}

Here, start creates a Promise, which is kept when the code inside of it completes. That work is scheduled to be done on the thread pool, and the code calling start continues onward, moving on to create another Promise for the next file. Soon enough, the thread pool’s input queue is nicely occupied with work, and threads are chugging through it. The loop is in a context that means it produces results – the Promise objects – thanks to our use of the do keyword. We give them to await, which waits for them all to get done. Perfect, right?

Well, not so fast. First of all, are hashes thread safe? That is, if I try to write to a hash from multiple threads, what will happen? Well, good question. And the answer, if you try this out on Rakudo on JVM today, is you’ll end up with a hosed hash, in all likelihood. OK. Go on. Say what you’re thinking. Here’s one guess at a common response: “But…but…but…OH NO WE MUST MAKE IT WORK, this is Perl, not Java, dammit!” Well, OK, OK, let’s try to make it work…

So the hash ain’t threadsafe. Let’s go put implicit locking in hash access. We’ll slow down everything for it, but maybe with biased locking it won’t be so bad. Maybe we can build a smart JIT that invalidates the JITted code when you start a thread. Maybe escape analysis will save the common cases, because we can prove that we’ll never share things. Maybe we can combine escape analysis and trace JIT! (Hey, anybody know if there’s a paper on that?) Heck, we gotta build smart optimizations to make Perl 6 perform anyway…

So anyway, a patch or two later and our hashes are now nicely thread safe. We’re good, right? Well, let’s run it and…ohhhh…wrong answer. Grr. Tssk. Why, oh why? Well, look at this:

%word_counts{$_}++;

What does the post-increment operator do? It reads a value out of a scalar, gets its successor, and shoves the result in the scalar. Two threads enter. Both read a 41. Both add 1. Both store 42. D’oh. So, how do we fix this? Hm. Well, maybe we could make ++ take a lock on the scalar. Now we’re really, really going to need some good optimization, if we ever want tight loops doing ++ to perform. Like, inlining and then lifting locks…if we can get away with it semantically. Or one of the tricks mentioned earlier. Anyway, let’s suppose we do it. Hmm. For good measure, maybe we’d better ponder some related cases.

%word_counts{$_} += 1;

Not idiomatic here, of course, but we can easily imagine other scenarios where we want something like this. So, we’d better make all the assignment meta-ops lock the target too…uh…and hold the lock during the invocation of the + operator. Heck, maybe we can not do locks in the spin-lock or mutex sense, but go with optimistic concurrency control, given + is pure and we can always retry it if it fails. So, fine, that’s the auto-increment and the assignment meta-ops sorted. But wait…what about this:

%word_counts{$_} = %word_counts{$_} + 1;

Well, uhh…darn. I dunno. Maybe we can figure something out here, because having that behave differently than the += case feels really darn weird. But let’s not get bogged down with side-problems, let’s get back to our original one. My hash is thread safe! My ++ is atomic, by locks, or some other technique. We’re good now, aren’t we?

Nope, still not. Why? Turns out, there’s a second data race on this line:

%word_counts{$_}++;

Why does this work when we never saw the word before? Auto-vivification, of course. We go to look up the current scalar to auto-increment it. But it doesn’t exist. So we create one, but we can’t install it unless we know it will be assigned; just looking for a key shouldn’t make it come to exist. So we put off the installation of the scalar in the hash until it’s assigned. So, two threads come upon the word “manatee”. Both go and ask the hash for the scalar under that key. Access to the hash is already protected, so the requests are serialized (in the one-after-the-other sense). The hash each time notices that there’s no scalar in that slot. It makes one, attaches to it the fact that it should be stored into the hash if the scalar is assigned to, and hands it back. The ++ sees the undefined value in the scalar, and sticks a 1 in there. The assignment causes the scalar to be bound to the hash…uh…wait, that’s two scalars. We made two. So, we lose a word count. Manatees may end up appearing a little less popular than dugongs as a result.

hue-manatee

How do we fix this one? Well, that’s kinda tricky. At first, we might wonder if it’s not possible to just hold some lock on something for the whole line. But…how do we figure that out? Trying to work out a locking scheme for the general case of auto-viv – once we mix it with binding – feels really quite terrifying, as this REPL session reveals:

> my %animals; my $gerenuk := %animals<gerenuk>; say %animals.perl;
().hash
> $gerenuk = 'huh, what is one of those?'; say %animals.perl;
("gerenuk" => "huh, what is one of those?").hash

So, what’s my point in all of this? Simply, that locking is not just about thread safety, but also about the notion of transaction scope. Trying to implicitly lock stuff to ensure safe mutation on behalf of the programmer means you’ll achieve thread safety at a micro level. However, it’s very, very unlikely that will overlap with the unspoken and uncommunicated transaction scope the programmer had in mind – or didn’t even know they needed to have in mind. What achieving safety at the micro level will most certainly achieve, however, is increasing the time it takes for the programmer to discover the real problems in their program. If anything, we want such inevitably unreliable programs to reliably fail, not reliably pretend to work.

I got curious and googled for transaction scope inference, wondering if there is a body of work out there on trying to automatically figure these things out. My conclusion is that either it’s called something else, I’m crap at Google today, or I just created a thankless PhD topic for somebody. (If I did: I’m sorry. Really. :-) My hunch is that the latter is probably the case, though. Consider this one:

while @stuff {
    my $work = @stuff.pop;
    ...
}

Where should the implicit transaction go here? Well, it should take in the boolification of @stuff and the call to pop. So any such general analysis is clearly inter-statement, except that we don’t want to hard-code it for boolification and popping, so it’s interprocedural, but then method calls are late-bound, so it’s undecidable. Heck, it’d be that way even in boring Java. With Perl you can go meta-programming, and then even your method dispatch algorithm might be late bound.

At this point, we might ponder software transactional memory. That’s very much on-topic, and only serves to reinforce my point: in STM, you’re given a mechanism to define your transaction scope:

my %word_counts;
await do for @files -> $filename {
    start {
        for slurp($filename).words {
            # THE FOLLOWING IS IMAGINARY SYNTAX. No, I can't 
            # hack you up a prototype down the pub, it's *hard*!
            atomic { %word_counts{$_}++ }
        }
    }
}

This looks very nice, but let’s talk about the hardware for a bit.

Yes, the hardware. The shiny multi-core thing we’re trying to exploit in all of this. The thing that really, really, really, hates on code that writes to shared memory. How so? It all comes down to caches. To make this concrete, we’ll consider the Intel i7. I’m going to handwave like mad, because I’m tired and my beer’s nearly finished, but if you want the gory details see this PDF. Each core has a Level 1 cache (actually, two: one for instructions and one for data). If the data we need is in it, great: we stall for just 4 cycles to get hold of it. The L1 cache is fast, but also kinda small (generally, memory that is fast needs more transistors per byte we store, meaning you can’t have that much of it). The second level cache – also per core – is larger. It’s a bit slower, but not too bad; you’ll wait about 10 cycles for it to give you the data. (Aside: modern performance programming is thus more about cache efficiency than it is about instruction count.) There’s then a level 3 cache, which is shared between the cores. And here’s where things get really interesting.

As a baseline, a hit in the level 3 cache is around 40 cycles if the memory is unshared between cores. Let’s imagine I’m a CPU core wanting to write to memory at 0xDEADBEEF. I need to get exclusive access to that bit of memory in order to do so. That means before I can safely write it, I need to make sure that any other core with it in its caches (L1/L2) tosses what it knows, because that will be outdated after my write. If some other core shares it, the cost of obtaining the cache line from L3 goes up to around 65 cycles. But what if the other core has modified it? Then it’s around 75 cycles. From this, we can see that pretty much any write to shared memory, if another core was last to write, is going to be incurring a cost of around 75 cycles. Compare that to just several cycles for unshared memory.

So how does our approach to parallelizing our word count look in the light of this? Let’s take a look at it again:

my %word_counts;
await do for @files -> $filename {
    start {
        for slurp($filename).words {
            %word_counts{$_}++;
        }
    }
}

Locks are just memory, so if we inserted those automatically – even if we did work out a good way to do so – then taking the lock is a shared memory write. That’s before we go updating the memory associated with the hash table to install entries, and the memory of the scalars to update the counts. What if we STM it? Even if we keep modifications in a local modification buffer, we still have to commit at some point, and that’s going to have to be a write to shared memory. In fact, that’s the thing that bothers me about STM. It’s a really, really great mechanism – way superior to locks, composable, and I imagine not too hard to teach – but its reason for existing is to make writes to shared memory happen in a safe, transactional, way. And it’s those writes that the hardware makes costly. Anyway, I’m getting side-tracked again. The real point is that our naive parallelization of our program – even if we can find ways to make it work reliably – is a disaster when considered in the light of how the hardware works.

So, what to do? Here’s an alternative.

# Produce a word counts hash per file - totally unshared!
my @all_counts = await do for @files -> $filename {
    start {
        my %word_counts;
        for slurp($filename).words {
            %word_counts{$_}++;
        }
        %word_counts
    }
}

# Bring them together into a single result.
my %totals;
for @all_counts {
    %totals{.key} += .value;
}
say %totals.elems;

Those familiar with map-reduce will probably have already recognized the pattern here. The first part of the program does the work for each file, producing its own word count hash (the map). This is completely thread local. Afterwards, we bring all of the results together into a single hash (the reduce). This is doing reads of data written by another thread, of course. But that’s the cheaper case, and once we get hold of the cache lines with the hash and scalars, and start to chug through it, we’re not going to be competing for them with anything else.

Of course, the program we get at the end is a bit longer. However, it’s also not hard to imagine having some built-ins that make patterns like this shorter to get in place. In fact, I think that’s where we need to be expending effort in the Perl 6 concurrency work. Yes, we need to harden MoarVM so that you can’t segfault it even if you do bad things. Yes, we should write a module that introduces a monitor keyword, which declares a class that automatically takes a lock around each of its method calls:

monitor ThreadSafeLoggingThingy {
    has @!log;

    method log($msg) {
        push @!log, $msg;
    }

    method latest($n) {
        $n < @!log
            ?? @!log[*-$n .. *]
            !! @!log[]
    }
}

Yes, we should do an Actor one too. We could even provide a trait:

my @a is monitor;

Which would take @a and wrap it up in a monitor that locks and delegates all its calls to the underlying array. However, by this point, we’re treading dangerously close to forgetting the importance of transaction scope. At the start of the post, I told the story of the hopelessly unsafe calls to a legacy library from a multi-threaded web application. I had it hunted down and fixed in a morning because it exploded, loud and clear, once I started subjecting it to load tests. Tools to help find such bugs exist. By contrast, having to hunt bugs in code that is threadsafe, non-explosive, but subtly wrong in the placing of its transaction boundaries, is typically long and drawn out – and where automated tools can help less.

In closing, we most certainly should take the time to offer newbie-friendly concurrent, parallel, and asynchronous programming experiences in Perl 6. However, I feel that needs to be done by guiding folks towards safe, teachable, understandable patterns of a CSP (Communicating Sequential Processes) nature. Perl may be about Doing The Right Thing, and Doing What I Mean. But nobody means their programs to do what the hardware hates, and the right thing isn’t to make broken things sort-of-work by sweeping complex decisions on transaction scope under the carpet. “I did this thing I thought was obvious and it just blew up,” can be answered with, “here’s a nice tutorial on how to do it right; ask if you need help.” By contrast, “your language’s magic to make stuff appear to work just wasted days of my life” is a sure-fire way to get a bad reputation among the competent. And that’s the last thing we want.
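
To give a taste of that CSP direction, here is a sketch - built on the Channel and Promise types, and only a sketch, not a recommended final form - in which workers send their unshared per-file hashes to a single aggregating consumer:

# a CSP-flavoured sketch: workers communicate results over a channel
my $results = Channel.new;
my @workers = do for @files -> $filename {
    start {
        my %counts;
        %counts{$_}++ for slurp($filename).words;
        $results.send(%counts);  # hand the whole hash over; nothing shared
    }
}
Promise.allof(@workers).then({ $results.close });

# a single consumer aggregates; no locks, no contended writes
my %totals;
for $results.list -> %counts {
    %totals{.key} += .value for %counts;
}
say %totals.elems;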


Jonathan Worthington | 6guts | 2014-04-17 00:09:34

It’s been a while since I wrote an update here. Happily, timotimo has taken up the role of being our weekly Perl 6 reporter, so there’s a good place to follow for regular updates. However, I wanted to take a moment to provide the bigger picture of what’s been happening in the last couple of months, share my perspectives on it, and talk a bit about where things are headed from here.

Optimization, optimization, optimization!

A lot of recent effort has gone on optimization. NQP, the subset of Perl 6 that we use to implement much of the Rakudo Perl 6 compiler, has performance on MoarVM that starts to approach that of Perl 5, and on JVM sometimes exceeds that of Perl 5 for longer running things (it typically runs the forest fire benchmark from our benchmark suite faster once the JIT has had time to get going, for example). By contrast, Rakudo’s performance itself has been comparatively awful. Thankfully, things have been getting better, as we’ve worked to improve optimization, reduce costs of common things, and gradually begun to close the gap. This has involved work on both Rakudo and NQP’s optimization phases, along with work on improving the built-ins and some performance-oriented factoring changes. There’s still plenty of work to go, but anybody using Rakudo on MoarVM will likely feel the results in the next release. To give an idea of the improvement in HEAD Rakudo on MoarVM, which will appear in the April monthly:

  • Array and hash access is more than 3 times faster
  • Most multi-dispatches are now enormously cheaper
  • Many, many unrequired scalar allocations are optimized away
  • The forest fire benchmark on Rakudo can render twice as many frames per second
  • Compiler performance is around 15% better (estimate going on CORE.setting compile time)

Compared to Rakudo on Parrot the difference is more marked. On compiler performance alone, the difference is enormous: you can build the entirety of Rakudo on MoarVM on my box in around 80s (which I’m not happy with yet, though you rarely change so much that you have to do the whole thing). That is less time than it takes for Rakudo on Parrot to complete the parse/AST phases of compiling CORE.setting (the built-ins). Running the spectest suite happens in half the time. Both of these times are important because they influence how quickly those of us working on Perl 6 implementation can try out our changes. Unsurprisingly, most of us do the majority of our development on MoarVM first these days.

One consequence of this work is that Rakudo on MoarVM is often sneaking ahead of Rakudo on JVM on some benchmarks now, even once the mighty JVM JIT kicks in. This won’t last long, though; a couple of the optimizations done will not be a great deal of work to port to the JVM, and then it can reclaim its crown. For now! :-)

Introducing “spesh”

A whole other level of performance-related work has been taking place in MoarVM itself. The first goal for the project was “just run NQP and Perl 6”, and the VM simply got on with interpreting the bytecode we threw at it. That doesn’t mean it wasn’t carefully designed along the way – just that the focus in terms of execution was to be simple, correct and sufficiently complete to serve as a solid Rakudo backend. With that goal achieved, the next phase of the project is underway: implementing dynamic optimization based on program information available at runtime, speculative optimizations that can be undone if things they rely on are broken, and so forth.

The first steps in that direction will be included in this month’s MoarVM release, and are to thank for much of the improved compiler performance (since the compiler is a program running on the VM too). The easiest way to get an overview is for me to walk you through the pieces in src/spesh in MoarVM.

  • graph is about building a representation of the bytecode (at a frame level) suitable for analysis and transformation (the two steps involved in optimization). It starts out by building a Control Flow Graph. It then computes the dominance tree, which it uses to rename variables so as to produce a Static Single Assignment form of the bytecode. This is a form whereby a given name is only written to once, which eases many, many aspects of analysis.
  • args takes a tuple of incoming arguments, considers their types, arity, and so forth. It produces a set of guard clauses that indicate when a given specialization of the code applies (that is, a version of the code improved by making assumptions about what was passed), and then re-writes various argument access instructions to “unsafe” but fast ones that it can prove will always work out.
  • facts takes the graph, looks through it for sources of type information (including the incoming arguments) and does an initial propagation of that information through the graph. It creates usage counts to be later used in dead code elimination.
  • optimize takes this annotated graph, and makes a pass through it, applying a number of optimizations. Granted, there are not so many yet; so far we’ve mostly worked on getting to the point of having a framework to prove safety of transformations, and adding more of them comes next. However, those there so far can do things like:
    • Resolve methods to avoid hash lookups
    • Install monomorphic method caching if that doesn’t work out
    • Resolve type checks, possibly eliminating entire branches of code
    • Re-write attribute binds into “unsafe” pointer operations
    • Eliminate dead code
  • codegen takes the optimized graph and produces bytecode from it again. However, in the future (if we’re lucky, then hopefully through a GSoC project), this is where we would produce machine code instead.
  • deopt deals with the situation where some invariant that specialized code may be relying on gets broken, and yet that specialized code is still on the call stack. It walks the call stack, looking for specialized code on it and tweaking return addresses and other data structures so that when we return into the code, we’re back in the safe (though of course slower) unspecialized code that checks things as needed.

In essence, this is giving MoarVM a JIT. Of course, it’s not producing machine code yet, but rather JITing back to improved bytecode. While we tend to think of JITs primarily as “turn the program into machine code”, that’s really just one small part of any modern JIT. Analysis and specialization of the program before the machine code is produced is just as important; with this approach, we get to focus in on that part first and get some of the benefits now.
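
To make that a little more concrete, here is a sketch of what those pieces might do to an ordinary routine’s hot path. The routine is illustrative, not taken from MoarVM or its test suite, and the comments are my annotations:

sub mean(@values) {
    # graph:    builds the CFG for this frame and rewrites it into SSA form
    # args:     emits guards ("the argument is a conforming Array"), then
    #           swaps argument access for fast, unchecked instructions
    # facts:    propagates type knowledge ($sum starts out as an Int)
    #           through the graph, recording usage counts as it goes
    # optimize: resolves method lookups like .elems up front and drops
    #           any branches the type facts prove dead
    # deopt:    if a later call breaks the assumptions, execution falls
    #           back into the unspecialized, fully-checked code
    my $sum = 0;
    $sum += $_ for @values;
    $sum / @values.elems;
}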

Concurrency

Progress on Perl 6 concurrency continues. The JVM implementation of the concurrency features has had various performance improvements since the March release, and MoarVM now has support for most of the Perl 6 concurrency features also. However, the MoarVM support for these features is early and most certainly not yet production ready. We’ll include it in the April release, but stick with Rakudo on the JVM for concurrency things if you’re doing anything that matters. If you just want to play with the basics, either will do.
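
By “the basics” I mean things along these lines (a minimal sketch; the computation is made up):

my $sum = start {
    # runs on a pool thread; the result becomes the Promise's value
    [+] 1..1_000_000;
}
# ... do something else in the meantime ...
say await $sum;    # blocks until the Promise is kept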

Rakudo on MoarVM takes the spectest crown

Rakudo on its various backends holds the top spots on the Perl 6 specification test suite pass rate. However, nowadays Rakudo on MoarVM has worked its way into the lead. How so? Because it has the best Unicode database introspection support, opening up a range of tests that no other backend handles yet. Additionally, because it gets right some of the Unicode stuff that Parrot does, but the JVM doesn’t. And, finally, because on top of that it can now pass a bunch of the concurrency tests.

A multi-backend Rakudo Star

I’d hoped we would get a Rakudo Star release with support for all three backends out in March. It didn’t happen; the module tests showed up some holes. We’ve by now largely fixed those for Rakudo on MoarVM, and we’re looking quite good for the April Rakudo Star coming with support for both Parrot and MoarVM. With some effort, I’m optimistic we’ll have a JVM Star release in good shape for April too. This will provide users who want the “batteries included” release a choice of backends and, notably, give those using Parrot a chance to switch over to MoarVM, getting substantial performance improvements on most programs along with lower startup time and memory use.

Where next?

In the coming months, I’m going to be focusing on the following areas:

  • More improvements to the Rakudo and NQP optimizers, so we generate better (faster, smaller) code.
  • More improvements to the Rakudo built-ins, so they operate more efficiently.
  • Making the MoarVM concurrency support more robust, and improving the parallel GC.
  • Advancing the state of asynchronous I/O, so we’ll have good support for it on both the JVM and MoarVM.
  • Teaching spesh to specialize better. There are a bunch of data access things that can be made cheaper, as well as being able to better optimize multiple dispatch. Beyond that, both inlining and escape analysis are on the menu.
  • Improving native type support, including providing native arrays.

I’ve been hidden away coding for much of the year so far, apart from putting in an appearance at FOSDEM. But I’m getting on the road soon! I’ll be at the Dutch, Polish and Czech Perl Workshops, and look forward to seeing folks and sharing what I’ve been working on. Hope to see some of you out there!


Jonathan Worthington | 6guts | 2014-04-12 00:21:53

Announce: Rakudo Star Release 2014.03

A useful, usable, “early adopter” distribution of Perl 6

On behalf of the Rakudo and Perl 6 development teams, I’m happy to
announce the March 2014 release of “Rakudo Star”, a useful and usable
distribution of Perl 6. The tarball for the March 2014 release is
available from http://rakudo.org/downloads/star/. A Windows .MSI
version of Rakudo Star is also available at that location.

In the Perl 6 world, we make a distinction between the language
(“Perl 6”) and specific implementations of the language such as
“Rakudo Perl”. This Star release includes [release 2014.03] of the
[Rakudo Perl 6 compiler], version 6.1.0 of the [Parrot Virtual
Machine], plus various modules, documentation, and other resources
collected from the Perl 6 community.

[release 2014.03]:
https://github.com/rakudo/rakudo/blob/nom/docs/announce/2014.03.md
[Rakudo Perl 6 compiler]: http://github.com/rakudo/rakudo
[Parrot Virtual Machine]: http://parrot.org

Some of the new features added to this release include:

  • The core of Rakudo::Debugger is now part of Rakudo itself and works across all backends
  • “make” no longer itemizes its arguments
  • for-loops at the statementlist level are now sunk by default
  • Better parsing of unspaces and formatting codes inside Pod blocks
  • for-loops are now properly lazy
  • Numerous Pod parsing and formatting improvements
  • @ as a shortcut for @$, % as a shortcut for %$
  • List infix reductions no longer flatten
  • Numerous compiler suggestion improvements

Please note that this release of Rakudo Star does not support the JVM
nor the MoarVM backends from the Rakudo compiler. While the other backends
mostly implement the same features as the Parrot backend, some bits are
still missing that lead to module build problems or test failures.
We hope to provide experimental JVM-based and MoarVM-based Rakudo Star
releases in April 2014.

There are some key features of Perl 6 that Rakudo Star does not yet
handle appropriately, although they will appear in upcoming releases.
Some of the not-quite-there features include:

  • advanced macros
  • threads and concurrency (in progress for the JVM and MoarVM backends)
  • Unicode strings at levels other than codepoints
  • interactive readline that understands Unicode
  • non-blocking I/O
  • much of Synopsis 9 and 11

There is an online resource at http://perl6.org/compilers/features
that lists the known implemented and missing features of Rakudo and
other Perl 6 implementations.

In many places we’ve tried to make Rakudo smart enough to inform the
programmer that a given feature isn’t implemented, but there are many
that we’ve missed. Bug reports about missing and broken features are
welcomed at rakudobug@perl.org.

See http://perl6.org/ for links to much more information about
Perl 6, including documentation, example code, tutorials, reference
materials, specification documents, and other supporting resources. A
draft of a Perl 6 book is available as docs/UsingPerl6-draft.pdf in
the release tarball.

The development team thanks all of the contributors and sponsors for
making Rakudo Star possible. If you would like to contribute, see
http://rakudo.org/how-to-help, ask on the perl6-compiler@perl.org mailing list, or join us on IRC #perl6 on freenode.

rakudo.org | rakudo.org | 2014-04-01 17:48:38


I got into programming because I wanted to write games. I played games as a kid (Wolfenstein 3D, Putt-Putt, I don’t remember the rest of the names), and I thought “when I grow up, I’m going to write games!”

I never really did; at some point I realized I’ve written more compilers than games: whatever happened to the childhood dream? So I thought I’d write some, to try and learn something new.

With two friends from work I went to a JavaScript game programming conference – it was the only game conf I’d ever heard of, so I thought “JavaScript or not, let’s see how gamedev geeks party”. The universal “let’s create idiotic games and make a lot of money on ads and In-App Purchases” attitude of the startup crowd discouraged me a little bit, but I tried to ignore that and focus on the technical content. I never liked JavaScript, I didn’t really want to use it for any kind of programming, and frankly, working in Perl and Python I had grown tired of dynamic typing altogether. But I thought “hmm, maybe if I created a superset of JS that had nice type annotations, which the compiler checks and then drops, emitting vanilla JS, that wouldn’t be so bad to write code in”. I consulted a friend of mine, and, as it usually happens, it turned out that such a thing already exists: it’s called TypeScript, and Microsoft created it long ago. Oh well, let’s give it a try.

Why am I writing about all this? Where does Perl 6 come in? Thing is, when I started programming in TypeScript, I got annoyed. It’s severely underdocumented and undersupported, and development is not pleasant, because you get some errors from the compiler and then different errors from a browser. But the worst thing is: it was slow! It was so slow it was unbearable, and I thought “ah, screw it. I’ll be better off with Perl 6”.

I chose Perl 6 for performance reasons! Ain’t that something to tell my grandkids about.

Of course, creating games in Perl 6 is not so trivial: I’ll have to write the engine/framework/whatever myself. Time to roll the sleeves up and get to work.

I got quite motivated by http://lessmilk.com. This guy creates a new game every week to learn game development. Cool thing! He was describing phaser.js in one of his articles, so I created Steroids, and modelled it after Phaser.

Why Steroids? Well, at some point I ported my C Asteroids to Perl 6, as a proof of concept, to see if it could indeed handle 60fps games (it can), and the “steroids” bit somehow got stuck in my mind. Also, being on steroids gives you much more flexibility than, say, being on the rails. Don’t worry, there’s nothing bad about being on Steroids: just ask Duke Nukem, he got by just fine.

So, Perl 6 on Steroids was born. I started writing a running-jumping game, and abstracting the commonly used bits to a module as I went on. Why a running-jumping game? Well, you asked for it: it’s time for another backstory:

Ever rode in the backseat of a car as a kid, looking out the window? Did you imagine a person running along the car, jumping over obstacles? I did, and from what I’ve heard I am not the only one. Thus, “Through the Window” was born: a game where a man runs along the horizon, jumping over trees and cows, being the first showcase for Steroids, and a reason for it to exist.

The post is getting lengthy enough, and there’s much to announce still, so I’ll run through the 80 lines of code really quickly to show you what Steroids gives you. You can read the entire source code here.

class Game is Steroids::Game

You create a class that inherits from Steroids::Game. You need to define at least two methods for it to make any sense: create() and update(). The former initializes the game, and the latter is called 60 times per second to update the game state.
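
In skeleton form, that looks like this (a minimal sketch; the method bodies are just placeholder comments):

class Game is Steroids::Game {
    method create {
        # one-time setup: load bitmaps, add sprites, register events
    }

    method update {
        # called 60 times per second: input handling, spawning, collisions
    }
}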

Some of the things you may want to do in the create() method:

self.load_bitmap('background', 'assets/background.bmp');

self.add_sprite('background', 0, 0);

Pretty self-explanatory. Steroids handles loading bitmaps from disk for you, and putting them in a scene.

self.load_bitmap('runner', 'assets/runner.bmp');

$!runner = self.add_sprite('runner', 50, GROUNDLEVEL);

$!runner.y -= $!runner.h;

$!runner.gravity = 1;
$!runner.when({ $_.y > GROUNDLEVEL - $_.h }, {
    $!runner.y = GROUNDLEVEL - $!runner.h;
    $!runner.velocity[1] = 0;
});

Plenty of things here: we add a runner to the scene. We move him up a little bit, so he’s actually standing on the ground rather than having his head at ground level, we give him gravity (so he’s falling down all the time), and we add an event so that when his feet hit the ground we stop him from falling any further. This should probably be handled by collision detection at some point, but I haven’t got around to writing collision detection yet.
 
That’s the interesting part of create(); now let’s look at update() really quickly.
 
method update {
    if @!obstacles
    and @!obstacles[0].x < ($!runner.x + $!runner.w) < (@!obstacles[0].x + @!obstacles[0].w)
    and $!runner.y + $!runner.h > @!obstacles[0].y {
        say "===========================================";
        say " Game over! You traveled $!distance pixels ";
        say "===========================================";
        say "";
        self.quit;
    }
Here’s our half-assed collision detection: if the front foot of the runner is inside the first obstacle (so, the leftmost), then the game is over.
 
How do the obstacles get there in the first place?
if $!distance > $!last-obstacle + 40 and rand > 0.9 {
    my $s = self.add_sprite(<cow tree>.pick, self.width, GROUNDLEVEL);
    $s.y -= $s.h;
    $s.velocity[0] = -12;
    @!obstacles.push: $s;
    $s.when({ $_.x < 0 - $_.w }, {
        self.remove_sprite($_);
        @!obstacles.shift;
    });
    $!last-obstacle = $!distance;
}
If enough time has passed since we put an obstacle on the road (we don’t want the road to be impossible to travel), we add either a tree or a cow on the ground, as far on the right as we can. We make it slowly move to the left, add it to the list of obstacles, and add an event so when it reaches the left edge of the screen we remove it from the scene.
This part features a hack: @!obstacles.shift removes the first element of the list, and it just so happens that the object disappearing from the scene will always be the first one on the list: we don’t need to look through @!obstacles to find which one it is.
 
if self.is_pressed("Space") and $!runner.y == GROUNDLEVEL - $!runner.h {
    $!runner.velocity[1] = -22;
}
Pretty obvious: if Space is pressed while the runner is on the ground, he bounces upwards.
 
That’s just about all there is to describe. Go play around with it, and remember the soundtrack (in the README) – it’s a very important part of the game :)
 
(I was informed that the build process can be a bit more complicated on OSX; the entire Steroids development team is working hard to fix it, but if you have a good and ready solution, please send me a pull request).
 
But wait, there’s more! To celebrate the best game I’ve ever written, I’m announcing a contest. The task is to write a game using Steroids (with as many patches to it as you want). Let’s see how much we can squeeze out of those 120 lines of code to create something fun. One week from now, next Sunday, I’m going to pick a winner and reward the author with a game that I like and that the author doesn’t yet have. Have an inappropriate amount of fun!

Tadeusz Sosnierz (tadzik) | Whatever but Cool Perl | 2014-03-23 14:36:45

"How can I parse indented text with a grammar?" has turned into a frequently-asked question recently. People want to parse Python and CoffeScript.

My fix is twofold. First, here's Text::Indented, a module that does it.

Second, I'll recreate my steps in creating this module. Each section will have a description of what needs to be done, a failing test, and then the appropriate implementation code to pass the test.

Quite a simple indent

We want to be able to handle indentation at all.

    my $input = q:to/EOF/;
    Level 1
        Level 2
    EOF

    parses_correctly($input, 'single indent');

Well, that's easy. This grammar will do that:

regex TOP { .* }

(Kent Beck told me I can cheat, so I cheat!)

Too much indent for our own good

But there are some indent jumps that we're not allowed to make. Anything that indents more than one step at a time, basically. Let's check for that.

    my $input = q:to/EOF/;
    Level 1
            Level 3!
    EOF

    fails_with($input, Text::Indented::TooMuchIndent);

This takes a little more code to fix. We declare an exception, start parsing lines, and separate each line into indent, extra whitespace, and the rest of the line. Finally we check the line's indent against the current indent — mediated by the contextual variable @*SUITES. You'll see where I'm going with this in a minute.

class TooMuchIndent is Exception {}

constant TABSTOP = 4;

regex TOP {
    :my @*SUITES = "root";

    <line>*
}

sub indent { @*SUITES.end }

regex line {
    ^^ (<{ "\\x20" x TABSTOP }>*) (\h*) (\N*) $$ \n?

    {
        my $new_indent = $0.chars div TABSTOP;

        die TooMuchIndent.new
            if $new_indent > indent() + 1;
    }
}

(The <{ "\\x20" x TABSTOP }> is a bit of a workaround. In Wonderful Perl 6 we would be able to write just [\x20 ** {TABSTOP}].)

Actual content

Having laid the groundwork, let's get our hands dirty. We want the content to end up, line by line, on the right scoping level.

    my $input = q:to/EOF/;
    Level 1
        Level 2
    EOF

    my $root = parse($input);

    isa_ok $root, Text::Indented::Suite;
    is $root.items.elems, 2, 'two things were parsed:';
    isa_ok $root.items[0], Str, 'a string';
    isa_ok $root.items[1], Text::Indented::Suite, 'and a suite';

We need a Suite (term borrowed from Python) to contain the indented lines:

class Suite {
    has @.items;
}

This requires a slight amending of TOP:

regex TOP {
    :my @*SUITES = Suite.new;

    <line>*

    { make root_suite }
}

The logic in line to create new suites with new indents:

# ^^ (<{ "\\x20" x TABSTOP }>*) (\h*) (\N*) $$ \n?

my $line = ~$2;

if $new_indent > indent() {
    my $new_suite = Suite.new;
    add_to_current_suite($new_suite);
    increase_indent($new_suite);
}

add_to_current_suite($line);

For all this, I had to define some convenience routines:

sub root_suite { @*SUITES[0] }
sub current_suite { @*SUITES[indent] }
sub add_to_current_suite($item) { current_suite.items.push($item) }
sub increase_indent($new_suite) { @*SUITES.push($new_suite) }

But what about de-indenting?

We've handled indenting and creating new suites nicely, but what about de-indenting?

    my $input = q:to/EOF/;
    Level 1
        Level 2
    Level 1 again
    EOF

    my $root = parse($input);

    is $root.items.elems, 3, 'three things were parsed:';
    isa_ok $root.items[0], Str, 'a string';
    isa_ok $root.items[1], Text::Indented::Suite, 'a suite';
    isa_ok $root.items[2], Str, 'and a string';

Easily fixed with an elsif case in our line regex:

elsif $new_indent < indent() {
     decrease_indent;
}

And a convenience routine:

sub decrease_indent { pop @*SUITES }

Hah, you missed multi-step de-indents!

Indenting multiple steps at a time isn't allowed... but de-indenting multiple steps is. (This may actually be the strongest point of this kind of syntax. It corresponds to the } } } or end end end case of languages with explicit block delimiters, and is arguably neater.)

    my $input = q:to/EOF/;
    Level 1
        Level 2
            Level 3
            Level 3
    Level 1 again
    EOF

    my $root = parse($input);

    is $root.items.elems, 3, 'three things on the top level';
    is $root.items[1].items[1].items.elems, 2, 'two lines on indent level 3';

Oh, but we only need to change one line in the implementation to support this:

decrease_indent until indent() == $new_indent;

And a half!

Now for some random sins. You're not supposed to indent partially, by a non-multiple of the indent size.

    my $input = q:to/EOF/;
    Level 1
          Level 2 and a half!
    EOF

    fails_with($input, Text::Indented::PartialIndent);

So we introduce a new exception.

class PartialIndent is Exception {}

And a condition that checks for this:

# ^^ (<{ "\\x20" x TABSTOP }>*) (\h*) (\N*) $$ \n?

my $partial_indent = ~$1;

die PartialIndent.new
    if $partial_indent;

What do you mean, "jumped the gun"?

Secondly, you're not meant to indent the first line; it has to be at indentation level 0.

    my $input = q:to/EOF/;
        Level 2 already on the first line!
    EOF

    fails_with($input, Text::Indented::InitialIndent);

We introduce another exception for that.

class InitialIndent is Exception {}

And a condition that matches our test case.

die InitialIndent.new
    if !root_suite.items && $new_indent > 0;

The importance of handles

As a final clean-up refactor, let's change @.items in Suite to this:

class Suite {
    has @.items handles <push at_pos Numeric Bool>;
}

It makes Suite more Array-like. Piece by piece:

  • push allows us to push directly into a Suite object, instead of into its .items attribute.
  • at_pos allows us to index Suites directly. Things like $root.items[1] in the tests turn into $root[1].
  • Numeric gets rid of the .elems calls for us in the tests, and we can write $root.items.elems as just +$root instead.
  • Finally, Bool allows us to write !root_suite.items as just !root_suite().

Somehow I liked doing this refactor last, after all the dust around the implementation had settled. It makes the API much more enjoyable to use, and hides a bunch of unnecessary steps along the way. I really like the way handles saves a bunch of boring code.
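
To see the savings, compare a few calls from the code above before and after the change (a small sketch; nothing new here beyond the delegations just listed):

# before the refactor
current_suite.items.push($item);
say $root.items.elems;
die InitialIndent.new if !root_suite.items && $new_indent > 0;

# after it: Suite delegates push, indexing, Numeric and Bool to @.items
current_suite.push($item);
say +$root;
die InitialIndent.new if !root_suite() && $new_indent > 0;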

Enjoy

Anyway, that's parsing of indented code. Not as tricky as I thought.
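
For completeness, here's roughly how using the module looks end to end — a sketch that assumes parse is exported, as the tests above suggest:

use Text::Indented;

my $input = q:to/EOF/;
Level 1
    Level 2
Level 1 again
EOF

my $root = parse($input);
say +$root;          # 3: a string, a suite, and a string
say $root[1][0];     # "Level 2", thanks to the at_pos delegation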

Now I fear I've damned myself to contribute this solution to arnsholt++'s budding py3k implementation. 哈哈

Carl Masak | Strangely Consistent | 2014-03-23 01:12:03

Beside him, Melvin and Lavender and Allen all seemed to feel like marching too.
And Neville softly began to sing the Song of Chaos.

The tune was what a Muggle would have identified as John Williams's Imperial
March, also known as "Darth Vader's Theme"; and the words Harry had added were
easy to remember.

Doom doom doom
Doom doom doom doom doom doom
Doom doom doom
Doom doom doom doom doom doom
DOOM doom _DOOM_
Doom doom doom-doom-doom doom doom
Doom doom-doom-doom doom doom
Doom doom doom, doom doom doom.

By the second line the others had joined in, and soon you could hear
the same soft chant coming from nearby parts of the forest.

And Neville marched alongside his fellow Chaos Legionnaires,
strange feelings stirring in his heart,
imagination becoming reality,
as from his lips poured a fearful song of doom.

-- Harry Potter and the Methods of Rationality
http://hpmor.com/chapter/30

On behalf of the Parrot team, I'm proud to announce Parrot 6.2.0, also known
as "Imperial Amazon". Parrot (http://parrot.org/) is a virtual machine aimed
at running all dynamic languages.

Parrot 6.2.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/devel/6.2.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.2.0 News:
- Core
+ Re-enable old imcc flags for parrot and parrot_old, such as
-On -a -v -y -E -dxx. [GH #1033]
+ Fixed imcc -O1 and -O2
-O1 fixes:
= Special-case get_global for branch_cond_loop_swap, which broke
NCI tests [GH #1037]
= set_addr label does mark a basic_block, dead_code_remove() needs
the label. Fixes nqp [GH #1061].
-O2 used_once fixes:
= Allow used_once elimination only for purely functional ops
without side-effects [GH #1036]
= Empty ins->next in deletion [GH #1042].
-O2 constant propagation fixes:
= Empty ins->op ptrs when deleting or skipping certain instruction
[GH #1039],
= Wrong logic leading to missed detecting writes from get_results
[GH #1041],
= Propagate only matching types in setters [GH #1042],
= Stop at yield or invokecc for possible push_eh/pop_eh non-local
effects [GH #1044]
+ Fixed TT #1930, a floating point optimizer problem on PPC
+ Added cache iterators in encoding_find_*cclass [GH #1027]
to speed up the utf8 pattern "scan a whole string for newlines".
- Build
+ Set emacs buffer-read-only:t tags in generated files [GH #1034]
+ Provide coda for generated include/*.pasm files [GH #1032]
+ Fix parser for bison 3 [GH #1031]
+ Add support for __builtin_expect LIKELY/UNLIKELY branch optimizations
in a new auto::expect step. [GH #1047]
- Deprecations
+ Warn on deprecated usage of append_configure_log()
- Documentation
+ Updated pod for parrot and tools/build/c2str.pl
- Tests
+ Added -O1 and -O2 to fulltest
- Community
+ Parrot has been accepted to Google Summer of Code 2014
+ Got a candidate for "Improve performance of method signatures"


The SHA256 message digests for the downloadable tarballs are:
a4c97e5974cf6e6ee1e34317aafd2d87a3bd63730098a050d4f09802b13da814 parrot-6.2.0.tar.gz
f8b9cd2d558a1517038dc3154343f622ab1fd7b1f1d13f41a5c6dd51425bfe8e parrot-6.2.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is 15 Apr 2014.

Enjoy!

Perl 6 Announce | perl.perl6.announce | 2014-03-21 17:36:08

On behalf of the Parrot team, I'm proud to announce Parrot 6.1.0, also
known as "Black-collared Lovebird". Parrot (http://parrot.org/) is a
virtual machine aimed at running all dynamic languages.

Parrot 6.1.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/devel/6.1.0/), or by
following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.1.0 News:
- Build
+ Improve auto::libffi probe with -fstack-protector-all or
-fstack-protector-strong with recent GCC and OpenBSD's toolchains
- Documentation
+ Replace perldoc by a new podextract.pl [GH #1028, #973,
#520], which fixes problems with 'sudo make install' generated ops
pods as root.

Warnings:
- Latest nqp does not support the new packfile API yet,
replacing EvalPMC.
- This release of Parrot fails to build out-of-the-box under Bison 3,
e.g. on Fedora 20. For workarounds
see https://github.com/parrot/parrot/issues/1031

The SHA256 message digests for the downloadable tarballs are:
87d25119c73acdb26f89ac4c68d73f3d996451ada51f3cb2cd4878b6f0e0a34c
parrot-6.1.0.tar.gz
bb1294ad2a7d5b3c4688fc736fb775e94ecfe35fdc072a2631c2080eb5f366f7
parrot-6.1.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project, especially cPanel for the time, and Erin Schoenhals
for donating her old MacBook G4 PowerPC to update the native_pbc's.
I've also updated all the missing old documentation on parrot.github.io.
Our next scheduled release is 18 Mar 2014.

Enjoy!

--
Reini Urban
http://cpanel.net/ http://www.perl-compiler.org/

Perl 6 Announce | perl.perl6.announce | 2014-02-18 22:48:26


Announce: Rakudo Star Release 2014.01

A useful, usable, “early adopter” distribution of Perl 6

On behalf of the Rakudo and Perl 6 development teams, I’m happy to
announce the January 2014 release of “Rakudo Star”, a useful and usable
distribution of Perl 6. The tarball for the January 2014 release is
available from http://rakudo.org/downloads/star/. A Windows .MSI
version of Rakudo Star is available in the downloads area as well.

In the Perl 6 world, we make a distinction between the language
(“Perl 6”) and specific implementations of the language such as
“Rakudo Perl”. This Star release includes [release 2014.01] of the
[Rakudo Perl 6 compiler], version 5.9.0 of the [Parrot Virtual
Machine], plus various modules, documentation, and other resources
collected from the Perl 6 community.

[release 2014.01]:
https://github.com/rakudo/rakudo/blob/nom/docs/announce/2014.01.md
[Rakudo Perl 6 compiler]: http://github.com/rakudo/rakudo
[Parrot Virtual Machine]: http://parrot.org

Some of the new features added to this release include:

  • The eval sub and method are now spelled EVAL
  • Numeric.narrow to coerce to narrowest type possible
  • Can now supply blocks with multiple arguments as sequence endpoints
  • Method calls and hash/list access on Nil give Nil

This release also contains a range of bug fixes, improvements to error
reporting and better failure modes.

Please note that this release of Rakudo Star does not support the JVM
nor the MoarVM backends from the Rakudo compiler. While the other backends
mostly implement the same features as the Parrot backend, many bits are
still missing, most prominently the native call interface.
We hope to provide JVM-based and MoarVM-based Rakudo Star releases soon.

There are some key features of Perl 6 that Rakudo Star does not yet
handle appropriately, although they will appear in upcoming releases.
Some of the not-quite-there features include:

  • advanced macros
  • threads and concurrency (in progress for the JVM backend)
  • Unicode strings at levels other than codepoints
  • interactive readline that understands Unicode
  • non-blocking I/O
  • much of Synopsis 9 and 11

There is an online resource at http://perl6.org/compilers/features
that lists the known implemented and missing features of Rakudo and
other Perl 6 implementations.

In many places we’ve tried to make Rakudo smart enough to inform the
programmer that a given feature isn’t implemented, but there are many
that we’ve missed. Bug reports about missing and broken features are
welcomed at rakudobug@perl.org.

See http://perl6.org/ for links to much more information about
Perl 6, including documentation, example code, tutorials, reference
materials, specification documents, and other supporting resources. A
draft of a Perl 6 book is available as docs/UsingPerl6-draft.pdf in
the release tarball.

The development team thanks all of the contributors and sponsors for
making Rakudo Star possible. If you would like to contribute, see
http://rakudo.org/how-to-help, ask on the perl6-compiler@perl.org
mailing list, or join us on IRC #perl6 on freenode.

rakudo.org | rakudo.org | 2014-01-31 16:58:10

This month’s Rakudo compiler release was cut today, and there’s a bunch of good stuff in there. In this post I’ll take a quick look at what’s been done.

MoarVM Support

This is the first Rakudo compiler release to have support for building and running on MoarVM, a new VM being built especially for Perl 6 and NQP (the Perl 6 subset a sizable chunk of Rakudo is written in). MoarVM support is not quite up to the level of the JVM and Parrot backends yet. It passes fewer specification tests than either of them – though it’s getting awfully close (Rakudo on MoarVM passes over 99% of the specification tests that Rakudo on the JVM – the current leader – does). Thus, you can actually run a heck of a lot of Perl 6 code just fine on it already. I used it recently in a pair programming session, and we only hit one bug in the couple of hours we were using it.

The fast-path for signature binding that I mentioned in my previous post has also been put in place. It did, as hoped, lead to a fairly dramatic speedup. The workload of building Rakudo’s built-ins and running the specification test suite was also a good basis for doing some GC tuning, which led to further improvements. By this point, on my box, Rakudo on MoarVM now has:

  • The lowest startup time of any Rakudo backend
  • The shortest spectest time of any Rakudo backend
  • For the CORE.setting build and spectests, the smallest memory footprint of any Rakudo backend

Other Rakudo developers have reported similar findings. I need more time to look into the exact numbers, but it would appear that Rakudo on MoarVM is also the fastest to build. CORE.setting build time is roughly competitive with the JVM now (though how roughly seems to vary quite widely – I think it depends on which JVM, or even which version, is being used), but startup time for NQP on MoarVM is rather lower, meaning that those parts of the build go by faster.

The focus for the next month or two will be getting into a position where we can produce a Rakudo Star release that uses MoarVM. This means digging through the last 1% of failing spectests and dealing with them, finishing the work of getting Panda (our module installer) to work with Rakudo on MoarVM, and then hunting bugs that keep us from running the modules. Getting NativeCall working will also be a notable task, although given we already have a NativeCall in C working against 6model (the one we built for Parrot), there is a lot of prior art this time – unlike on the JVM.

On performance – we’re not even scratching the surface of what’s possible. MoarVM’s design means it has a lot of information to hand to do a good amount of runtime specialization and optimization, but none of this is implemented yet. I aim to have a first cut of it in place within the next few months. Once we have this analysis and specialization framework in place, we can start thinking about things such as JIT compilation.

Rakudo on JVM Improvements

Probably the best news in this release for anybody working with Rakudo on JVM is that the gather/take stack overflow bug is now fixed. It was a fun one involving continuations and a lack of tailcall semantics in an important place, but having recently done the MoarVM implementation of continuations, I was in a good place to hunt it down and get a fix in. A few other pesky issues are resolved, including a regex/closure interaction issue and sometimes sub-optimal line number reporting.

The other really big piece of JVM-specific progress this month has been arnsholt++ continuing to work on the plumbing to get us towards full NativeCall support for JVM. This month, a number of the big missing pieces landed. NativeCall working, and the modules that depend on it working, is the last big blocker for a Rakudo Star on JVM release, and it’s now looking quite possible that we’ll see that happen in the February one.

General Rakudo Improvements

While a lot of energy went on the things already mentioned, we did get some nice things in place that are independent of any particular backend: improvements to the Nil type, the sequence operator, sets and bags, adverb syntax parsing, regex syntax errors, aliased captures in regexes, and numerics. MoarVM’s rather stricter interpretation of closure semantics than we’ve had in place on other backends has also led to various code-gen fixes, which may lead to better performance in certain scenarios across the board too (one of those “I know it probably should, but I didn’t benchmark” situations).

I’d like to take a moment to thank everyone who contributed to this month’s release. This month had the highest Rakudo contributor count in a good while – and I’m hopeful we can maintain and exceed it in the months to come.


Jonathan Worthington | 6guts | 2014-01-24 01:55:47

May your pleasures be many, your troubles be few.
-- Cast of "Hee Haw"

On behalf of the Parrot team, I'm proud to announce Parrot 6.0.0, also known
as "Red-necked Amazon". Parrot (http://parrot.org/) is a virtual machine aimed
at running all dynamic languages.

Parrot 6.0.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/supported/6.0.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

Parrot 6.0.0 News:
- Core
- Build
- Documentation
+ Fixed bad IPv6 examples in pdd22_io, thanks to Zefram++ [GH#1005]
- Tests
+ Fixed failure in t/configure/062-sha1.t.
+ Updated to Unicode 6.3 (libicu5.2): U+180e Mongolian Vowel Separator
is no longer whitespace [GH #1015]
- Community


The SHA256 message digests for the downloadable tarballs are:
e150d4c5a3f12ae9d300f019bf03cca58d8e8051dd0b934222b4e4c91160cd54 parrot-6.0.0.tar.gz
6cb9223ee389a36588acf76ad8ac85e2224544468617412b1d7902e5eb8bd39b parrot-6.0.0.tar.bz2

Many thanks to all our contributors for making this possible, and our sponsors
for supporting this project. Our next scheduled release is 18 Feb 2014.

Enjoy!

Perl 6 Announce | perl.perl6.announce | 2014-01-22 17:38:28