01 Nov 2011

Planet Parrot

parrot.org: Parrot for Dummies (which means me! :-)

This is just a quick, first blog post to ensure everything is working correctly.

Cheers!

Alvis

01 Nov 2011 9:26pm GMT

24 Oct 2011

Tadeusz Sośnierz (tadzik): MuEvent: AnyEvent lookalike for Perl 6

Trying to struggle with Select in Parrot, I accidentally discovered that its Socket has a .poll method. What a trivial, yet satisfying way to have some simple non-blocking IO. Thus, MuEvent was born.
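The core idea is simple enough to sketch in a few lines. Here is a minimal poll-based loop in JavaScript (the watcher interface, with its poll() and onReady() names, is purely illustrative, not MuEvent's actual API):

```javascript
// Minimal busy-polling event loop sketch. Each watcher exposes a poll()
// that returns true when it is ready, and an onReady() callback.
// These names are illustrative, not MuEvent's real interface.
function runLoop(watchers) {
    while (watchers.length > 0) {
        for (const w of watchers.slice()) {
            if (w.poll()) {
                w.onReady();
                watchers.splice(watchers.indexOf(w), 1);
            }
        }
    }
}

// A fake "socket" that becomes readable after a given number of polls.
function fakeSocket(readyAfter, onReady) {
    let polls = 0;
    return { poll: () => ++polls >= readyAfter, onReady };
}

const events = [];
runLoop([
    fakeSocket(3, () => events.push("a ready")),
    fakeSocket(1, () => events.push("b ready")),
]);
// "b" becomes ready on the first pass, "a" on the third
```

A real loop would keep long-lived watchers registered and add timers, but the basic shape, repeatedly polling each handle instead of blocking on it, is the same trick a .poll method on a socket makes possible.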

Why MuEvent? Well, in Perl 6, Mu can do much less than Any. MuEvent, as expected, can do much less than AnyEvent, but it's trying to keep the interface similar.

You're welcome to read the code, and criticise it all the way. Keep in mind that I have no idea how I should properly write an event loop, so bonus points if you tell me what could have been done better. I don't expect MuEvent to be an ultimate solution for event-driven programming in Perl 6, but I hope it will encourage people to play around. Have an appropriate amount of fun!


24 Oct 2011 9:06pm GMT

19 Oct 2011

parrot.org: Parrot 3.9.0 "Archaeopteryx" Released

On behalf of the Parrot team, I'm proud to announce Parrot 3.9.0 "Archaeopteryx".
Parrot (http://parrot.org/) is a virtual machine aimed at running all dynamic languages.

Parrot 3.9.0 is available on Parrot's FTP site
(ftp://ftp.parrot.org/pub/parrot/releases/supported/3.9.0/), or by following the
download instructions at http://parrot.org/download. For those who would like
to develop on Parrot, or help develop Parrot itself, we recommend using Git to
retrieve the source code to get the latest and best Parrot code.

read more

19 Oct 2011 2:08am GMT

05 Oct 2011

parrot.org: PACT - Design Notes

TL;DR: https://github.com/parrot/PACT

So after my last blog post, I started a gist to keep track of "how would I write PCT". I called it PACT, the Parrot Alternate Compiler Toolkit. I suppose I could have called it PCT2, but I really don't want to claim it will 100% replace PCT. PCT is very valuable to the people using it right now, but there's no small desire to add to it, and I'd like to help it be better. Parrot's main audience, to my mind, is prospective compiler writers, and the easier we can make their lives, the better.

read more

05 Oct 2011 9:14pm GMT

01 Oct 2011

Andrew Whitworth: Jaesop Runtime Additions

I haven't had a lot of time for hacking recently, but I did have a little bit of time this weekend. I decided that Jaesop needed a little bit of love, so I went ahead and provided it.

The first thing I did this morning was to add PCRE bindings to the stage0 runtime. Now, if you build Parrot with PCRE support, you can write this kind of stuff in JavaScript:

var regexp = new RegExp("ba{3}", "");
regexp.test("baaa");    // 1
regexp.test("caaa");    // 0
regexp.test("baa");     // 0

regexp = /ba{3}/;       // Same.

Support isn't perfect yet, but it's enough of a start to get us moving with other things. Specifically, I don't have modifiers like g and i working yet, but once I figure out the way to tell PCRE to do what I want it shouldn't be too hard to add.

If you don't build Parrot with PCRE support, the RegExp object won't be available, and I think it's going to spit out some ugly warning messages. Considering this is just an ugly bootstrapping stage and it's not complete yet, I don't mind making these kinds of things optional.

I also added in the beginnings of a runtime. Now there are some basic objects like Process which you can use to interact with the environment, and FileStream which you can use for input and output. The process variable is always available as a global, and it gives access to the standard streams and command-line arguments. Here are some examples:

process.stdout.writeLine("Hello World!");
process.stdout.writeLine(process.argv[0]);
var s = process.stdin.readLine();

And so, after several weeks of development, you can finally write a simple "Hello World" program in Jaesop.

What my hacking today has shown me is that the one thing I am severely lacking is tests. I do have some tests, but not nearly enough. Specifically, I've learned that my test coverage of Arrays is severely inadequate. I also need to test a few other details which I found out were horribly broken when I went to play with them today.

Despite some of the setbacks, the Jaesop stage0 compiler is progressing nicely and it's getting to the point where we can start to do some real work with it. These few runtime additions, though small, have greatly improved the situation. There are a few things I still need to do to it, besides the testing I mentioned above: I need to improve the PCRE bindings, because right now they are very basic. I need to add methods to Object, Array, and String for usability. I also need to add a mechanism like node.js' require routine to load modules, and maybe a few other similar code management details. When that stuff is all done, and when Parrot has proper 6model support, we can start moving forward on the stage1 compiler. I'm really starting to get excited about taking that next step.

01 Oct 2011 7:00am GMT

21 Sep 2011

parrot.org: Parrot 3.8.0 "Magrathea" Released

On behalf of the Parrot team, I'm proud to announce Parrot 3.8.0, also known as "Magrathea". Parrot (http://parrot.org/) is a virtual machine aimed at running all dynamic languages.

Parrot 3.8.0 is available on Parrot's FTP site (ftp://ftp.parrot.org/pub/parrot/releases/devel/3.8.0/), or by following the download instructions at http://parrot.org/download. For those who would like to develop on Parrot, or help develop Parrot itself, we recommend using Git to retrieve the source code to get the latest and best Parrot code.

read more

21 Sep 2011 3:35am GMT

13 Sep 2011

Andrew Whitworth: Rosella Harness and Query Improvements

In my work on Jaesop, I realized that some parts of the Rosella Harness library were a little bit more messy than I would like. I decided to take some time and get that library raised up to a better level of quality. To make some cleanups, I used the Query and FileSystem libraries for certain tasks, which turned out to be a great move, because I identified nifty new features that were needed in those libraries as well.

Query Streams

The first version of the Query library was very straightforward. It provided implementations of some higher-order functions and method semantics that allowed calls to be chained together. Here is a quick example:

var result = Rosella.Query.as_queryable([1, 2, 3, 4])
                .filter(function(i) { return i % 2 == 0; })
                .map(function(i) { return i * 2; })
                .fold(function(s, i) { return s + i; }, 0)
                .data();

That example, clearly contrived, takes an array of numbers. It filters out the odd numbers, then multiplies everything else by 2 and sums the results together. It's simple and straightforward: the filter takes an input array and generates an output array of values which meet the requirements. The map routine takes an input array and produces an output array. The fold routine takes an input array and outputs a single integer. If each method printed its name to the console when it was invoked, we would see something like this:

filter filter filter filter
map map
fold fold
data

We do all the filtering first, then all the mapping, then all the folding. It's very straightforward, but it's also eager, which isn't great when we would rather be working with a lazy object.
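Plain JavaScript array methods behave exactly the same way, so the eager grouping is easy to demonstrate directly (with reduce standing in for fold, and logging added to show call order):

```javascript
// Plain JS arrays evaluate each stage eagerly over the whole input,
// so all the filters run first, then all the maps, then the folds.
const calls = [];
const result = [1, 2, 3, 4]
    .filter(i => { calls.push("filter"); return i % 2 === 0; })
    .map(i => { calls.push("map"); return i * 2; })
    .reduce((s, i) => { calls.push("fold"); return s + i; }, 0);
// calls: filter filter filter filter map map fold fold
// result: 12
```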

The new addition to Query is the Stream. A Stream is any iterable object which might prefer to be read lazily. Here's an example that I'm playing with, using some new improvements to the FileSystem library as well:

var f = new Rosella.FileSystem.File("foo.txt");
var result = Rosella.Query.as_stream(f)
                .take(5)
                .filter(function(l) { return l != null && length(l) > 0; })
                .map(function(l) { return "<<" + string(l) + ">>"; })
                .data();

I've updated Rosella.FileSystem.File to be iterable. The default File iterator reads the file line by line. This is a new feature and isn't really configurable yet. In this example, we create a Stream from the File object. We take the top 5 lines from the file, remove any empty lines, and surround the remainder in << >> brackets. The best part is that we read the file lazily: this example only reads the top 5 lines of the file, not its entire text. That's a big help if we have a huge file, or something like a long-lived pipe that is spitting out an endless sequence of data.

Another thing that is different about Streams is that they are interleaved. To see what I mean, if the methods above printed out their names when invoked, we would see this pattern:

take filter map
take filter map
take filter map
take filter map
take filter map
data

Where the first example did all the maps first and all the filters second, this example does them each one at a time for each input data item. Some of the methods which are on a normal Queryable aren't present on Stream, and some of the Stream methods aren't lazy. Some of them need to be eager, like .data(), .sort() or .fold().
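In modern JavaScript, the same interleaving falls out naturally if each stage is a generator that pulls one item at a time from the previous stage. This is only a sketch of the idea, not Rosella's implementation:

```javascript
const calls = [];

// Each stage pulls from the previous one on demand, so the stages run
// interleaved, one input item at a time, instead of in eager groups.
function* take(src, n) {
    const it = src[Symbol.iterator]();
    while (n-- > 0) {
        const { value, done } = it.next();
        if (done) return;
        calls.push("take");
        yield value;
    }
}
function* filter(src, f) {
    for (const x of src) { calls.push("filter"); if (f(x)) yield x; }
}
function* map(src, f) {
    for (const x of src) { calls.push("map"); yield f(x); }
}

const lines = ["a", "b", "c", "d", "e", "f", "g"];
const result = [...map(filter(take(lines, 5), l => l.length > 0),
                       l => "<<" + l + ">>")];
// result: ["<<a>>", "<<b>>", "<<c>>", "<<d>>", "<<e>>"]
// calls repeats "take filter map" once per item
```

Because take stops pulling after five items, the source is never read past the fifth line, which is exactly the property that makes streaming over a file or a pipe cheap.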

If you want an updated harness for your project, it's easy to get one, even easier than copying and pasting the code from above. If you have Rosella installed, you can automatically create a harness from a template. Run this command at your terminal:

rosella_test_template harness winxed t/ > t/harness

That's all you need, and you'll get a spiffy full-featured harness which takes advantage of all the new features I've been working on. If you prefer your harness be written in NQP instead of Winxed, just change the "winxed" argument above to "nqp" and you get that.

I'm going to work on an installable harness binary so you can just use one without needing to create your own harness. I don't have it yet, but it will not be too hard to make.

Harness Cleanups

The harness is basically a huge iterator. You set up a bunch of tests organized into a list of test runs. The harness iterates over each test run, iterates over each test, gets the output, and iterates over the lines of text in that output to get results. Then it iterates over all test runs and all result objects to build the results display shown to the user. This sounds like a perfect use for Query functionality, doesn't it? That's exactly what I thought, anyway. I reimplemented several parts of it using Query and the new Stream object. The input is set up as a stream over a pipe, and the TAP parsing is implemented as a stream of tokens from a String.Tokenizer. Combine those changes with some refactors, fixing abstraction boundaries, and an eye towards test coverage, and the new code is much prettier than the old code.
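To picture the TAP side of that pipeline: each line of test output gets classified and turned into a small result record. A simplified sketch in JavaScript (real TAP also has directives like "# TODO", nested diagnostics, and more, all of which this ignores, and none of these names are the library's actual parser):

```javascript
// Parse one line of TAP output into a simple record, or null for
// lines this simplified parser ignores.
function parseTapLine(line) {
    let m;
    if ((m = line.match(/^ok (\d+)(?: - (.*))?$/)))
        return { pass: true, num: Number(m[1]), desc: m[2] || "" };
    if ((m = line.match(/^not ok (\d+)(?: - (.*))?$/)))
        return { pass: false, num: Number(m[1]), desc: m[2] || "" };
    if ((m = line.match(/^1\.\.(\d+)$/)))
        return { plan: Number(m[1]) };
    return null;
}

const results = [
    "1..2",
    "ok 1 - loads",
    "not ok 2 - runs",
    "# a diagnostic",
].map(parseTapLine);
// results[0] is the plan, [1] a pass, [2] a failure, [3] null
```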

What most affects users is that harness code can now be cleaner. Here is what a simple harness used to look like in Winxed:

function main[main]() {
    var rosella = load_packfile("rosella/core.pbc");
    using Rosella.initialize_rosella;
    initialize_rosella("harness");
    var factory = new Rosella.Harness.TestRun.Factory();
    var harness = new Rosella.Harness();
    var view = harness.default_view();
    factory.add_test_dirs("Winxed", "t", 1:[named("recurse")]);
    var testrun = factory.create();
    view.add_run(testrun, 0);
    harness.run(testrun, view);
    view.show_results();
}

Here is what a new one looks like:

function main[main]() {
    var rosella = load_packfile("rosella/core.pbc");
    var (Rosella.initialize_rosella)("harness");
    var harness = new Rosella.Harness();
    harness.add_test_dirs("Automatic", "t", 1:[named("recurse")])
        .setup_test_run(1:[named("sort")]);
    harness.run();
    harness.show_results();
}

Not too bad for 6 real lines of code! The new "Automatic" test type reads the shebang line ("#! ...") from the test file to determine how to execute it. If you want to specify a particular language like "NQP" or "Winxed", you can still do that too. Notice also that we can sort files by filename if we pass that parameter to .setup_test_run. At the moment, sorting is alphabetic and per-run only. We don't shuffle files between runs.
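The "Automatic" dispatch can be imagined as little more than a peek at the first line of each file; something like this hypothetical helper (not Rosella's actual code):

```javascript
// Guess the test language from a shebang line such as
// "#! /usr/bin/env winxed" or "#! parrot-nqp". Hypothetical helper,
// purely for illustration.
function detectTestType(firstLine) {
    if (!firstLine.startsWith("#!")) return null;
    if (firstLine.includes("winxed")) return "Winxed";
    if (firstLine.includes("nqp"))    return "NQP";
    if (firstLine.includes("parrot")) return "PIR";
    return null;
}
```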

Harnesses using the older style should all still work. I've tried as well as I can to keep the code backwards compatible. If you have a harness that doesn't work anymore after updating Rosella, it's a bug and I would love to hear about it. Also, all the same capabilities are there: the ability to substitute a custom View, the ability to break files up arbitrarily into test runs, and the ability to specify custom subclasses of TestRun or TestFile if you need some custom semantics.

After these rewrites, the version of the Harness library is 3. I don't know if anybody follows along with these per-library version numbers, but it is a decent point of reference. I don't expect to be making any large changes to this library again for a while.

File Iterators

I've implemented a very quick and basic iteration facility for files as part of the Rosella FileSystem library. The iterator type I have so far is a basic line iterator, calling the .readline() method on the given handle until EOF.

There are two ways to use the new FileIterator class: Iterate over a Rosella File object directly, or create an IterableHandle object over an existing low-level handle.

// Iterate over a File object
var file = new Rosella.FileSystem.File("foo/bar.txt");
for (string line in file) {
    ...
}

// Make an Iterable Handle
var fh = new 'FileHandle';
fh.open("foo/bar.txt", "r");
var ih = new Rosella.FileSystem.IterableHandle(fh);
for (string line in ih) {
    ...
}

That second option is actually kind of neat, because you can use it over any Handle object: FileHandle (including standard input and pipes), StringHandle, Socket, etc.

If you really want to be tricky, you can do what I do in the Harness library and create a stream over a handle and really do a lot of cool stuff:

var fh = new 'FileHandle';
fh.open("foo/bar.txt", "r");
var ih = new Rosella.FileSystem.IterableHandle(fh);
var stream = Rosella.Query.as_stream(ih);
stream
    .take(5)
    .filter(function(l) { return l != null && length(l) > 0 && substr(l, 0, 1) != "#"; })
    .project(function(l) { return split(";", l); })
    .foreach(function(string s) { say("Look at this: " + s); })
    .execute();

Again, this is a contrived example, but it should become apparent what kinds of stuff you can do with this.

I don't have directory iterators yet. That is something I haven't needed yet, but for which I can see some uses.

String Tokenizer Iterators

I don't know why I didn't think of it earlier, but now Tokenizers are iterable as well. If you have a tokenizer, you can iterate over it in two different ways:

var tokenizer = new Rosella.String.Tokenizer.Delimiter(",");
tokenizer.add_data("a,b,c,d,e,f");
for (string field in tokenizer) {
    ...
}

tokenizer.add_data("g,h,i,j,k,l");
for (var t in tokenizer) {
    ...
}

In the first, we do a shift_string operation which returns the raw string data. In the second we do a shift_pmc operation which returns the Token object. The Token contains some information like the type of token, some custom metadata, etc.
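In JavaScript terms, the two modes are just two iterators over the same internal buffer: one yields raw strings, the other yields token objects. An illustrative sketch (the names mirror, but are not, Rosella's API):

```javascript
// A delimiter tokenizer with two iteration modes, mirroring the
// shift_string (raw strings) and shift_pmc (token objects) operations.
// All names here are illustrative, not Rosella's actual interface.
class DelimiterTokenizer {
    constructor(delim) { this.delim = delim; this.queue = []; }
    add_data(s) { this.queue.push(...s.split(this.delim)); }
    // Yield raw string fields.
    *strings() { while (this.queue.length) yield this.queue.shift(); }
    // Yield token objects carrying type metadata alongside the data.
    *tokens() {
        while (this.queue.length)
            yield { type: "field", data: this.queue.shift() };
    }
}

const t = new DelimiterTokenizer(",");
t.add_data("a,b,c");
const raw = [...t.strings()];   // ["a", "b", "c"]
t.add_data("d,e");
const toks = [...t.tokens()];   // [{ type: "field", data: "d" }, ...]
```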

And of course, since you can iterate over them, you can use a Stream:

var tokenizer = new Rosella.String.Tokenizer.Delimiter(",");
tokenizer.add_data("a,b,c,d,e,f");
var stream = Rosella.Query.as_stream(tokenizer);
var new_data = stream
    .map(function(t) { return t.data(); })
    .filter(function(s) { return s != "b" && s != "e"; })
    .fold(function(a, b) { return sprintf("%s,%s", [a, b]); })
    .next();

Upcoming Changes

The new Harness library is making me pretty happy. I'm working on tests now, because the library has never had good test coverage and is suddenly much more testable than it ever has been. Considering Harness is a central part of my TAP testing strategy, it's kind of embarrassing that it has never been well tested itself. I am working on test coverage in a branch, and will probably be merging that to master soon. I don't expect to make any big changes to the library for a while after that.

The Stream class is not well fleshed-out yet and is absolutely untested besides the tests for its use in Harness. I need to finish up with a few of the features that I haven't needed yet, and then test it all. There are a few tweaks I might want to make to the way it works, but for the most part I am pretty happy with how it turned out and how fun it is to use.

The FileIterator and Token Iterator types I mentioned are even newer and less mature than Stream is, and need some serious review. They've been useful tools to get me to this point, but they can definitely stand to be improved in non-trivial ways. I've got some big refactors of the String library planned in the future, so if anybody has any requests for features now is a good time to mention them.

In my testing of Harness, even though it's not complete yet, I've already found a few changes that need to be made in the MockObject and Proxy libraries as well. I plan to take a good hard look at those things to make sure they are up to the level I expect them to be. Also, I have a few other unstable libraries floating around that need attention, and could potentially become stable if I like where they are going.

13 Sep 2011 7:00am GMT

10 Sep 2011

Andrew Whitworth: Parrot, the Smoke Clears

For anybody who missed it, Parrot Architect Christoph sent an email to the parrot-dev mailing list suggesting that things were not going well for the Parrot project and we need to make a few changes. Specifically, we need to become faster and more nimble by ditching the deprecation policy, being less formal about the roadmap, and being more focused on making Parrot better. Later, Jonathan "Duke" Leto sent out an email talking about how we need to have a concise and clear vision to pursue, to prevent us from getting stuck in the same kind of swamp that we're trying to pull ourselves out of. He had some good ideas there that need to be addressed.

That's right, go back and read it again: We're ditching the deprecation policy. I'll bring the champagne, you bring the pitchforks and lighter fluid. It's time for celebration. At least, it should be.

After the email from cotto went out, things went in a direction I didn't expect. People started getting angry, and some of that anger was directed towards Rakudo. I think it was misdirected. Rakudo isn't the problem and never has been. The problem was the deprecation policy and some of the related decisions that have been made with it over time.

The thinking goes, I think, something like this: The deprecation policy was bad. Rakudo expected us to do what it said and that we promised to do. Therefore, Rakudo must have been bad also. I'm oversimplifying, of course.

Rakudo has their own thing going on. They have goals, and they make long-term plans and they have developers and they have dependencies and all the other stuff that an active software project has. Parrot is a pretty damn big part of their world, and knowing what Parrot is doing or what it plans to do and at what times is important for them. If Parrot has a policy that says "we guarantee certain things cannot change faster than a certain speed, or more often than certain limited times", they start to make plans around those guarantees and start to organize themselves in a way to take advantage of that. It's what any project would do.

Imagine, as a Parrot developer, that the GCC developers sent out a mass email tomorrow that said the equivalent of "Oh, we're not supporting the C89 standard anymore, we're only going to be compiling a custom language similar to C but with some new non-standard additions, but without some of the trickier, uglier parts of the standard. Nothing you can do about it, so get to work updating your code if you want to continue using GCC". We'd have some work to do, I'm sure, and we probably wouldn't be too happy about it. Rakudo on the other hand knows that Parrot isn't nearly so mature or stable as GCC, and has a lot of improvements to make. They might not always get credit for it, but they have been pretty patient with Parrot over the last few years, even when we probably didn't deserve so much patience.

Some people have said that Rakudo has been an impediment to Parrot development, or that they are a reason why Parrot has problems and why the pace of development has been so slow. I think that's a short-sighted sentiment. It's not Rakudo's fault that they expect us to mean what our official policy documents say. It's also not their fault that we put together the deprecation policy in the first place, or that we've implemented it in the particular way we have over the years. In short, whatever negative feelings some parrot devs think they have about Rakudo is just a smoke screen. The way Parrot and Rakudo interact is the symptom. There are larger cultural aspects at the root of the problem, and the deprecation policy was a large part of that.

Rakudo developers, by and large, want Parrot to develop and improve faster. I haven't spoken to a single Rakudo developer who was unhappy to see the deprecation policy go. Most of them are ecstatic about the change. It's hard to say that these people somehow want to sabotage us, or delay us, impede us, or whatever else. The things that Parrot needs (better performance, better encapsulation and interfaces, better implementations, better focus) are all things that are going to benefit Rakudo as well. This is what they want too. We're always going to quibble over details, but in general they want from us what we want from ourselves. 95% of the improvements we need to make for Rakudo are going to benefit other languages as well. The other 5% can be negotiated.

Parrot and Perl6 have a pretty long and interesting history together. Unfortunately, that history hasn't always been pretty. Parrot was originally started as the VM to run Perl6. The idea of running multiple dynamic languages in interoperating harmony, including early plans to support later versions of Perl 5, came later but eventually eclipsed the original goal in importance. You don't have to look far, even in some of the subsystems that have been most heavily refactored in recent months, to see Perl-ish influences in the code. Sometimes those influences are far from subtle. You also don't have to look too far to find instances of subsystems that were designed and implemented (and redesigned, and reimplemented) specifically to avoid doing what Perl6 needed.

Even in subsystems where the original goal may have been to support the needs of Perl6, many were designed and developed before people knew too well what Perl6 needed. There was a lot of guessing, and a lot of attempts to become some sort of hypothetical, language-neutral platform that would somehow end up supporting Perl6 well, without ever taking the needs of Perl6 into account specifically. It's like throwing a deck of cards in the air and hoping they all land in an orderly stack. It's almost unbelievable, and thoroughly disappointing, to think that the "Perl6 VM" would do so little over time to address the requirements of Perl6 and keep up with its needs as the understanding of those needs became more refined.

Around the 1.0 release Perl6 moved to a new separate repository which severely decreased bandwidth in the feedback loop. Whether this was a good move in terms of increased project autonomy, or a bad move in terms of decreased collaboration is a matter I won't comment on here. After the two projects separated, Parrot added a deprecation policy which guaranteed that our software couldn't be updated to reflect the maturing Perl6 project as it gained steam.

The short version goes like this: Parrot was supposed to be the VM to run the new Perl6 language. At few, if any, points in the project history did Parrot focus strongly on the needs of Perl6. Now, a decade later, people act shocked when Parrot doesn't meet the needs of Perl6 and meet them well. This, I believe, is the root of the problem.

Look at other VMs like the JVM and the .NET CLR. The JVM was developed with a strong focus on a single programming language: Java. When the JVM became awesome at running Java, and became a great platform in its own right, other languages like Clojure, Groovy and Scala started to pop up to take advantage. This is also not to mention the ported versions of existing languages that also found a home there: Jython and JRuby are great examples of that. The .NET CLR was set up with a focus on the languages C++, C# and VisualBasic.NET. Once the CLR became great at running these, other languages started to pop up: F#, IronPython, IronRuby, and others.

Those other VMs became great because they picked some languages to focus on, did their damndest to make a great platform for those, and then were able to leverage their abilities and performance to run other languages as well. Sure, we can always make the argument that Scala and Groovy are second-class citizens on the JVM, but that doesn't change the fact that both of those two run better on JVM, even as second-class citizens, than Perl6 runs on Parrot.

Somewhere along the line, the wrong decision was made with respect to the direction Parrot should take, and the motivations that should shape that direction. We need, in this time of introspection and reorganization, to unmake those mistakes and try to salvage as much of the wreckage as possible.

It should be obvious in hindsight where mistakes were made, and how we ended up in the unenviable situation we are in now. This isn't to say that the people who made those decisions should have known any better. At the time there were good-sounding reasons all around for why we needed to do certain things in certain ways. Hindsight is always clearer than foresight. It's easy to say that Rakudo is to blame because Parrot is filled with half-baked code that should have been good enough for Perl6 but never was, and then a deprecation policy that Rakudo expects to be followed. It's easy to misattribute blame. I understand that. What we shouldn't do is keep following that line of logic when we know it's not true. It's not correct and we need to get past it. Set it aside. Put it down. Walk away from it.

Look at the example of 6model. I don't know what the motivations were behind the design and implementation of Parrot's current object model, but I have to believe that it was intended to either support or enable support through extensibility of the Perl6 object model. It failed on both points, and eventually Rakudo needed to implement its own outside the Parrot repo. It's an extension, which means it works just fine with Parrot's existing systems and doesn't cause undue interference or conflicts. 6model is far superior to what Parrot provides now, and is superior for all the languages that we've seriously considered in recent months: Cardinal (the Ruby port) was stalled because it needed features only 6model provided. Puffin (the Python port) needed 6model. Jaesop, my new JavaScript port, is going to require 6model because the current object model doesn't work for it. These represent some of the most important and popular dynamic languages of the moment, and all of these would prefer 6model over the current Parrot object model by a wide margin. So ask yourself why the Rakudo folks were forced to develop 6model elsewhere, and why Parrot hasn't been able to port it into its core yet. Ask yourself that, and see if you can come up with any reasonable answer. I can't find one, other than "oops, we screwed up BIG".

6model should have been developed directly in the Parrot repo. Everybody knew that our object model was garbage and that 6model was going to be a vast improvement. Maybe we didn't know how much it would improve things, but we knew it would be an improvement. Instead, we had a policy that effectively prevented that kind of experimentation and development, and a culture that claimed doing things the Perl6 way would prevent us from attracting developers from other language camps. Again, despite the fact that the developers of other languages on Parrot desperately wanted an improved object model like 6model, we basically made it impossible.

And then because of that mistake that was made, if we finally want to get the real thing moved into Parrot core where it belongs, we have to spend some significant developer effort to do it. Of course Rakudo already has 6model working well where it is, so moving 6model into Parrot core is listed as "low priority" by the Rakudo folks. Not that we can't do it still (and we will do it), but why would they prioritize moving around something they already have? We shot ourselves in the foot, but the bullet ricocheted a few times and hit us in the other foot, the hand, and then shoulder.

There's a sentiment in the Rakudo project that the fastest way to prototype a new feature is to do it in NQP or in Rakudo and eventually, maybe, backport those things to Parrot. That's a devastating viewpoint as far as Parrot devs should be concerned, and we need to do everything we can to change that perception. It's extremely stupid and self-defeating for us to hold on to ugly, early prototypes of Perl6 systems and not jump at the ability to upgrade them to the newer versions of the same Perl6-inspired systems where possible. This is especially true when there are no other compelling alternatives available, or even clear designs for alternatives. It's also extremely stupid of us to make it harder for people to improve code that has frequently been referred to as "garbage".

There is nothing in Parrot so well done that, if we were asked by our biggest user to change it, we shouldn't take the suggestion seriously. In most cases, we should treat those suggestions as commands. We're VM people, and maybe sometimes we might know better (or think we know better) or at least think about things differently. We can have the final say, but we should take every suggestion or request extremely seriously, especially when those requests come from active HLL developers, and especially when those developers are part of a project like Rakudo.

We should be much more aggressive about moving our object model to 6model. We should be very aggressive about moving other Rakudo improvements, in whole or in part, into Parrot core. Things like the changes required by the argument binder, or the new multi-dispatcher. We should also be very aggressive about having other such radical improvements prototyped directly in Parrot core, especially where we don't have an existing version, or where our version is manifestly inferior. Parrot core is where those kinds of things belong. Of course, we need to keep an eye towards other languages and make tweaks as appropriate, but we need to pursue these opportunities when they are presented.

dukeleto sent an email to the parrot-dev list in follow-up, trying to lay out some ideas for a new direction and a new vision for Parrot. Some of his ideas are good, but some need refinement. For instance, he says that a good goal for us would be to get working JavaScript and Python compilers running on Parrot and demonstrate painless interoperability between them. I do agree that this is a great goal and could bring Parrot some much-needed attention. However, it can't be our only goal.

Right now Parrot has one major user: Rakudo. There's no way around it, in terms of raw number of developers and end users, they are the single biggest user by a mile. No question. For us to put together a vision or a long-term roadmap that doesn't feature them, and feature them prominently, is a mistake. There may come a time in the future when they decide to target a different VM instead of Parrot. That time might even be soon, I don't know and I can't speak for them. What I do know is that so long as Rakudo is a user of Parrot, Parrot needs to do its damndest to be a good platform for Rakudo. A better vision for the future would be something like: "Make Parrot the best platform possible for Rakudo, but do so in a way that adequately supports and does not actively preclude implementations of JavaScript and Python". Talk about having a vision with sufficient focus!

I'm also not taking a jab at any other languages. Ruby, the Lisps, PHP and whatever other languages people like can be added to the list as well. JavaScript and Python are the two dukeleto mentioned and are two that I happen to think are as important as any of them, and are good candidates to be the ones we focus on. I would love to have a Ruby compiler on Parrot, and many others as well if people want to work on them.

If we increase performance by something like 50% and add a bunch of new features and Rakudo still leaves for greener pastures, at least we are that 50% faster and have all those new features. It's not like making Parrot better for Perl6 somehow makes it instantly worse for other languages. Sure we are going to come to some decisions where moving in one direction helps some and hurts others, but the biggest things on our roadmap right now don't require those kinds of hard decisions to be made. There is plenty of work to be done that brings shared benefit.

We want JavaScript. I know, because I'm working on it personally. We want Python too. Right now, we have Perl6 and we should want to keep it. We should want to support it as well as we can. Talk that involves distancing the two projects, or even severing the link between them, is wrong and needs to stop. Saying things like "Well, the Python people aren't going to like a VM that is too tied to Perl" is self-defeating. We don't have a working, complete Python compiler on Parrot and haven't been able to put one together in a decade of Parrot development. We do, however, have a damn fine Perl6 compiler. If Puffin, a product of this summer's GSoC, continues to develop and becomes generally usable and more mature, the conversation changes. If Jaesop, my new and extremely immature JavaScript compiler, comes around, the conversation changes. But until we have those things, we do have Perl6 and that needs to be something we focus on.

There are two directions we can logically go in the future: we can do one thing great, or we can do many things well. We can focus on Perl6 and be the best damn Perl6 VM that there can possibly be, or we can improve our game to support multiple dynamic languages, but not be the best with any. Both of these are fine goals, and I suspect what we want to do eventually lies somewhere in the middle. We do know what path JVM and CLR took, and where that got them. We also know what path Parrot has pursued for a decade, and where we are now because of it. I think the course of action should be very clear by now: so long as Perl6 is our biggest user, it needs to be our biggest source of motivation. Parrot is not so strong by itself that it can afford to ignore Rakudo or become more separate from it than it already is. Parrot might be that strong and mature one day, but that day isn't today.

In direct, concrete language, this is what I propose: We need to focus as much effort as we can to be a better VM for Rakudo Perl6, including moving as much custom code from the NQP and Rakudo repos as possible into Parrot core to lower barriers and increase integration. We need to do that, trying to put priority on those parts of the code that are going to affect JavaScript and Python implementations and making the difficult decisions in those cases. When compilers for those languages become more mature and we start to run into larger discrepancies between them, we can start revisiting some decisions as necessary. Until then, Rakudo is our biggest user and beyond that they are friends and community members. We need to focus on their needs. We need to focus on making Rakudo better, and we need to focus on making Parrot better for Rakudo. Everything else will come from that, if we do our job well enough.

10 Sep 2011 7:00am GMT

03 Sep 2011

feedPlanet Parrot

Andrew Whitworth: Jaesop Stage 0 Progress

A few days ago I started the Jaesop project (formerly "JSOP") to explore creating a JavaScript compiler on Parrot using bootstrapping. After only a few days of real effort I'm getting pretty darn close to having a stage 0 compiler ready for use.

The Jaesop stage 0 compiler, called js2wxst0.js, translates JavaScript code to Winxed. It is not a full JavaScript compiler; instead it compiles a useful subset of JavaScript which can be used for bootstrapping. Most of the syntax is supported, and the object model has acceptably faithful semantics. What I don't have is complete support for all built-in object types and methods, or 100% complete syntax translation. Some things, like the with keyword, are not and will not be supported. The compiler doesn't currently handle some common bits of syntax like try/catch, switch/case, or a few other things. Many of the basics like operators, assignment, variables, functions, closures, and basic control flow (for, while, if/else) are working just fine.

Of course, if it did everything and was perfect, we wouldn't call it "stage 0", we would just call it "the JavaScript on Parrot Compiler". The stage 0 compiler isn't the end goal, it's just a tool we're going to be able to use to make a better compiler. I'm not looking to make something perfect here, I'm trying to put together a bootstrapping stage 0 as quickly as possible.

The stage 0 compiler architecture is very simple. The Jison parser outputs AST, which I've had to make only a handful of modifications to from the original Cafe source. Then, the AST is transformed into WAST, a syntax tree for creating Winxed code. Finally, the WAST outputs Winxed. Most of the code here is complete and working very well. Late this week I finished the basics of the object model, and then I updated the compiler to output correct code for the model, and just today I got the test suite working again with all the new semantics.
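In miniature, those two tree walks look like this. Everything below is invented for illustration (toy node shapes, not the real Jison AST or WAST), but the shape of the transformation is the same:

```javascript
// Hypothetical sketch of the stage 0 flow: AST node -> WAST node -> Winxed text.
// The node shapes are made up; the real Jison/Cafe AST carries much more.

// Transform a toy AST node into a toy WAST node.
function astToWast(node) {
    switch (node.type) {
        case "NumberLiteral":
            return { wtype: "IntConst", value: node.value };
        case "BinaryOp":
            return {
                wtype: "BinOp",
                op: node.op,
                left: astToWast(node.left),
                right: astToWast(node.right)
            };
        default:
            throw new Error("unhandled AST node: " + node.type);
    }
}

// Emit Winxed-ish source text from a toy WAST node.
function wastToWinxed(node) {
    switch (node.wtype) {
        case "IntConst":
            return String(node.value);
        case "BinOp":
            return "(" + wastToWinxed(node.left) + " " + node.op + " "
                 + wastToWinxed(node.right) + ")";
        default:
            throw new Error("unhandled WAST node: " + node.wtype);
    }
}

// 1 + 2 as a toy AST:
var ast = {
    type: "BinaryOp", op: "+",
    left: { type: "NumberLiteral", value: 1 },
    right: { type: "NumberLiteral", value: 2 }
};
var winxed = wastToWinxed(astToWast(ast));  // "(1 + 2)"
```

The payoff of the two-stage split is that each walk stays dumb: the AST-to-WAST pass worries about semantics, and the WAST emitter only worries about spelling valid Winxed.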

The test suite is up and running again, although it doesn't have nearly enough tests in it to cover the work I've done up to this point. The suite has tests written in Winxed and also tests written directly in JavaScript. The former are for testing things like the runtime; the latter are for testing parsing and proper semantics. I want to increase coverage in both portions, because I've been dealing with a lot of aggravating regressions here and there as I code, and I want to make sure things get better monotonically from here.

Getting the test suite to work with the real JS object model was a little bit tricky. To get an example of why, here is a test I had in the suite prior to today's hacking:

load_bytecode("rosella/test.pbc");
Rosella.Test.test_list({
    test_1 : function(t) {
        t.assert.equal(0, 0);
    },
    test_2 : function(t) {
        t.assert.expect_fail(function() {
            t.assert.equal(0, 1);
        });
    },
    test_3 : function(t) {
        t.assert.is_null(null);
    }
})

Basically, this was my first sanity test, to prove that I could call Rosella Test functions from JS code. Unfortunately, after I re-did the object model, this test was broken. It got broken because of a fundamental feature of JavaScript: methods are just attributes, except they can be invoked. So a call to this:

Rosella.Test.test_list(...)

After compiling to Winxed, looks like this:

var(Rosella.*"Test".*"test_list")(...)

The .* operator looks up an attribute by name. The var(...) cast pulls out the value of the attribute into a PMC register, and the parens at the end invoke it. Notice that Rosella.Test isn't an object, it's a namespace. So that code was broken. Also notice that JavaScript has a notion of a global scope. We haven't explicitly declared a variable named "Rosella", so Jaesop tries to do a global lookup:

var Rosella = __fetch_global("Rosella");
var(Rosella.*"Test".*"test_list")(...);

Also, inside the tests, the assertions are done with t.assert.equal(), etc. But that's clearly wrong too, for all the same reasons. In short, the code was broken.
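The JavaScript rule driving all of this, that a method call is just an attribute fetch followed by an invocation, is easy to demonstrate in plain JavaScript:

```javascript
// In JavaScript, obj.m(...) means: fetch the "m" property, then invoke it.
// That's why the compiler must emit an attribute lookup plus a call,
// rather than a direct method call on a namespace.
var obj = {
    greeting: "hello",
    greet: function (name) { return this.greeting + ", " + name; }
};

var method = obj.greet;          // plain property fetch; nothing special
var direct = obj.greet("world"); // fetch + invoke in one expression

// The fetched function is an ordinary value and can be passed around;
// only the call site decides what `this` is bound to.
var rebound = method.call({ greeting: "hi" }, "there");
```

Here `direct` is "hello, world" and `rebound` is "hi, there": the same function value, two different receivers, which is exactly the behavior the generated Winxed has to reproduce.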

After some fixing and refactoring, I have the test situation all sorted out. Here is the same test today:

var t = new TestObject();
test_list([
    function() {
        t.equal(0, 0);
    },
    function() {
        t.expect_fail(function() {
            t.equal(0, 1);
        });
    },
    function() {
        t.is_null(null);
    }
]);

The TestObject constructor and the test_list function are defined in the test library as globals. TestObject is basically a JavaScript-ish wrapper around Rosella.Test.Asserter with all the same methods.

The test suite is working but does need to be expanded. I have a few more things to add to the compiler and runtime as well, which will be easier to do with better test coverage. I very much intend to have a working and usable Jaesop Stage 0 to release soon. Certainly it should be available by the Parrot 3.9 release, hopefully much earlier. With that available, I want to get started on Stage 1. Stage 1 doesn't have to happen at the same blistering pace. In fact, I think it would be beneficial for us to wait until Parrot has 6model support built in, so we can start making the "real" object model using those tools instead.

03 Sep 2011 7:00am GMT

26 Aug 2011

feedPlanet Parrot

Andrew Whitworth: New Rosella Libraries

The other day I quietly bumped the Rosella Template library to stable status. I've been living with the API for a while and am happy with it for the most part. I've also gotten documentation and unit tests up to a nice level and I felt pretty comfortable that it wasn't a buggy, incomplete piece of crap. I don't want to claim that it's perfect, but I'm pretty happy with it as a first attempt. There are a few internal bits that are awkward and difficult to test, but the overall public-facing form of the library is working well enough.

It's extremely easy to write up things like unit tests and documentation when I have a templating tool that can produce files for those kinds of things semi-automatically.

I've talked a lot about the templating library in two previous posts, and am planning a separate post for it later this week, so I won't go into details about it here as well. As I start putting together more tools that use it, I'll show those off so the reader can see the new library in action.

Instead, today I want to talk about some of the new Rosella libraries I have been playing around with. Some or all of these might become a stable part of Rosella some day too, but right now they aren't quite ready for prime time.

Rosella.Assert

Rosella.Assert, formerly known as "Contract", is a library for debugging. It provides a few interesting tools: runtime assertions, debug logging, and contracts. All of these features read a global flag to determine if they are on or off. If off, calls to the various assert, debug, and contract routines all do nothing. The calls themselves aren't completely removed, but they do short-circuit and exit early with no side-effects. Several people, especially GSoC students, have asked for these kinds of debugging routines, so it's about time somebody added them.

The library basically does nothing unless you turn it on, so it won't interfere with your code at all unless you enable it. To turn on the Assert library, you do this:

var(Rosella.Assert.set_active)(true);

Without that line of code, most calls in the Assert library do nothing at all. The calls are still made, but they check the flag and immediately return if it's not activated. Winxed does do dead-code elimination on conditionals with constant expressions. That dead-code elimination, along with a new __DEBUG__ constant added recently, means that you can make assertions disappear entirely if you want:

if (__DEBUG__)
    var(Rosella.Assert.debug)("This message probably won't appear");

The Assert library provides assertions too. So you can start peppering your code with calls to the assert function:

using Rosella.Assert.assert;
assert(1 == 1);

Notice that the conditional here is evaluated before the assert function is called, so any side-effects of evaluating it happen even when assertions are disabled. However, a different form of assertion takes a predicate Sub, which won't be evaluated at all if the library is turned off:

using Rosella.Assert.assert_func;
assert_func(function() { return 1 == 1; });

It's not very pretty, but it does what you expect it to do. If the Assert library is enabled and the assertion condition fails, an error message and backtrace are printed. Otherwise, nothing happens. Likewise, you can make the code disappear entirely using the __DEBUG__ flag.

The new library also provides contracts in two flavors: Object interface contracts, and function contracts. In the first, we verify certain features of an object: Does it have the necessary list of methods? Does it have the necessary attributes? We can use the contract to verify that the object has the expected interface, or throw an assertion failure if not. In the second type, we can insert predicates into an existing function or method, to do pre- and post-call testing of values. For instance, to assert that the first argument of a call to method bar() on class Foo is never null, we can set up this assertion:

var c = new Rosella.Assert.MethodContract(class Foo);
c.arg_not_null("bar", 0);

likewise, to verify that the method bar() never returns a null value, we can do the following:

c.return_not_null("bar", 0);

That will automatically inject predicates into the method, which will be checked every time the method is invoked, if the Assert library is enabled. If a check fails, you get an exception and a backtrace. If you turn off the library, no checking happens and method calls happen like normal with no interference or slowdown.
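The injection is essentially the classic method-wrapping trick. A rough JavaScript sketch of the idea, with invented helper names rather than the actual MethodContract internals:

```javascript
// Sketch of contract injection by method wrapping; this mirrors the idea,
// not the Rosella.Assert.MethodContract implementation.
function argNotNull(obj, methodName, argIndex) {
    var original = obj[methodName];
    obj[methodName] = function () {
        // Pre-call check: reject a null argument before the body runs.
        if (arguments[argIndex] == null)
            throw new Error(methodName + ": argument " + argIndex + " must not be null");
        return original.apply(this, arguments);
    };
}

function returnNotNull(obj, methodName) {
    var original = obj[methodName];
    obj[methodName] = function () {
        // Post-call check: reject a null return value after the body runs.
        var result = original.apply(this, arguments);
        if (result == null)
            throw new Error(methodName + ": returned null");
        return result;
    };
}

var foo = { bar: function (x) { return x; } };
argNotNull(foo, "bar", 0);
returnNotNull(foo, "bar");
```

Stacking the wrappers like this is also why turning the library off can be cheap: when contracts are disabled, the original method is simply never wrapped.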

One last detail: the Assert library integrates with the Test library, if you have them both loaded together. If you use Assert in a test suite using Rosella Test, you can use assertions and contracts as tests directly, and things printed out with debug() are printed out as normal TAP diagnostics. It's quite handy indeed, because if you have put the assertions and contracts into your code, you now have test code running inside your program, testing things that would otherwise be hard to get to. Setting up predicate assertions on methods in your code is a handy and useful replacement for mock objects, if mocks aren't your kind of thing.

This library has a lot of potential to be used as a debugging and testing aid, and I expect to be using it a lot in my own work once I get some of the last details sorted out.

Rosella.Reflect

Rosella.Reflect is a library for doing easy, familiar reflection. Basically, it's an abstraction over methods and tools already provided by Parrot's built-in types, but with a nicer interface. It is the same motivation as I had for the FileSystem library, which is like a much nicer veneer over the OS PMC and a handful of other lower-level file-manipulation details provided by Parrot. Right now Reflect is an early-stage plaything, but it's already looking nice and I have plenty more things to add to it.

With Rosella.Reflect, you can do things like this:

var f = new Foo();
var c = new Rosella.Reflect.Class(class Foo);
var b = c.get_attribute("bar");
b.set_value(f, "hello");        # Set Foo.bar = "hello"
var m = c.get_method("baz");
m.invoke(f, 1, 2, 3)            # f.baz(1, 2, 3)
var x = new Bar();
m.invoke(x, 1, 2, 3)            # Error, x is not a Foo

Basically, it provides type-safe object wrappers for classes, attributes and methods. It provides routines for iterating attributes and methods, dealing with them indirectly, and doing other reflection-based tasks too.
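In plain JavaScript, a type-safe method wrapper in this spirit might look like the following sketch; the class and wrapper names are invented for illustration:

```javascript
// Sketch of a type-safe method wrapper in the spirit of Rosella.Reflect.
// Names here are illustrative only.
function Foo() { this.bar = null; }
Foo.prototype.baz = function (a, b, c) { return a + b + c; };

function ReflectMethod(ctor, name) {
    this.ctor = ctor;
    this.name = name;
}
ReflectMethod.prototype.invoke = function (target) {
    // Type safety: refuse to run against the wrong kind of object.
    if (!(target instanceof this.ctor))
        throw new Error("target is not a " + (this.ctor.name || "expected type"));
    var args = Array.prototype.slice.call(arguments, 1);
    return this.ctor.prototype[this.name].apply(target, args);
};

var m = new ReflectMethod(Foo, "baz");
var f = new Foo();
var sum = m.invoke(f, 1, 2, 3);   // baz(1, 2, 3) on a Foo: 6

function Bar() {}
// m.invoke(new Bar(), 1, 2, 3) would throw: the target is not a Foo
```

The wrapper objects are where the "nicer interface" lives: the raw reflection primitives stay low-level, and the wrapper adds the checks and conveniences.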

In the future I want to add tools for building classes at runtime, tools for exploring packfiles, namespaces, and lexicals, and doing a few other things. This library is pretty heavily influenced by some of the things Parrot user Eclesia has been doing with his Eria project. It gives a nice object-based way to interact with some things in Parrot that don't always have the most friendly interfaces.

This library is very very young and is mostly a prototype. I am looking for more things to add to it, and hope it will become more generally useful. It's complicated by the fact that the upcoming 6model refactors could radically change the way we do some types of reflection, so I don't want to reinvent any wheels.

Rosella.Dumper

Rosella.Dumper is a replacement for the Data::Dumper library that ships with Parrot. It uses an OO interface with pluggable visitors and configurable settings. It's very early in development but it's already much more functional and usable than Data::Dumper. With the new interface, you can do something like this:

var dumper = new Rosella.Dumper();
string txt = dumper.dump(obj);
say(txt);

The Dumper object contains several collections of DumpHandler objects, which are responsible for dumping out particular types of object. DumpHandlers are arranged into 4 groups: type-based dumpers that dump objects of specific types, role-based dumpers which dump objects that implement a given role, miscellaneous dumpers which are given the opportunity to dump anything else, and special dumpers for things like null and anything that falls through the cracks. By mixing and matching the kinds of things you want to see dumped, you can customize behavior. By subclassing the various bits, you can change behavior and output formatting.

This library is pretty straightforward, and is already pretty generally useful. I have a few decisions left to make about the API and some of the default settings, but it's useful and usable now and I've already employed it myself for several recent debugging tasks.
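The handler-group dispatch can be sketched in JavaScript like this; the structure is invented to mirror the description above, not the real Rosella.Dumper API:

```javascript
// Sketch of handler-group dispatch: type handlers first, then
// miscellaneous fallbacks, with special cases for null and unknowns.
function Dumper() {
    this.typeHandlers = [];   // dump objects of a specific type
    this.miscHandlers = [];   // get a chance at anything else
}
Dumper.prototype.addTypeHandler = function (ctor, fn) {
    this.typeHandlers.push({ ctor: ctor, fn: fn });
};
Dumper.prototype.addMiscHandler = function (fn) {
    this.miscHandlers.push(fn);
};
Dumper.prototype.dump = function (obj) {
    if (obj === null) return "<null>";           // special handler for null
    for (var i = 0; i < this.typeHandlers.length; i++) {
        var h = this.typeHandlers[i];
        if (obj instanceof h.ctor) return h.fn(obj);
    }
    for (var j = 0; j < this.miscHandlers.length; j++) {
        var out = this.miscHandlers[j](obj);
        if (out !== undefined) return out;       // first taker wins
    }
    return "<unknown>";                          // fell through the cracks
};

var dumper = new Dumper();
dumper.addTypeHandler(Array, function (a) { return "[" + a.join(", ") + "]"; });
dumper.addMiscHandler(function (o) {
    if (typeof o === "number") return String(o);
});
```

Customization then falls out naturally: swapping handlers in and out changes what gets dumped, and subclassing a handler changes how it's formatted.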

Rosella.CommandLine

This is the newest prototype library of all. So new that as of the publishing of this blog post I haven't pushed the code to github yet. Rosella.CommandLine is a library for working with the command line and command line arguments in particular. Basically, it's a replacement for the GetOpt::Obj library which comes with Parrot, along with a few other features for making program entry easier. To give a short example, here's a test program I've been playing with using the new library:

function main[main](var args) {
    var rosella = load_packfile("rosella/core.pbc");
    var(Rosella.initialize_rosella)("commandline");
    var program = new Rosella.CommandLine.Program(args);
    program.run(real_main);
}

function real_main(var opts, var args, var other) {
    ...
}

The main function initializes Rosella and loads in the CommandLine library. Then it creates a Rosella.CommandLine.Program object to handle the rest of the details. The Program object takes the program arguments and automatically parses them out into a hash and some arrays based on some basic syntax rules. You can specify formats like you do in GetOpt::Obj if you want, or the library can parse them by default rules and just pass you the results. The run method of the Program object takes a reference to a Sub object to treat as the start of your program. It sets up a try/catch block to do automatic error catching and backtrace printing. Also, it can be used to dispatch certain arguments to different routines entirely, which is useful if you need to set up routines for printing out help or version information:

program.run(real_main, {
    "help" => function() { ... },
    "version" => function() { ... }
});

The argument processing is done by a new Rosella.CommandLine.Arguments class, which mimics much of the behavior of GetOpt::Obj, but has a few subtle differences, which are partly because it's early in the implementation and partly because I like certain syntaxes better than others. Also, if you return an integer from your real_main routine, that integer will be the exit code of the process, which should be familiar for most C coders and their ilk. If you return no value, or if you use the exit opcode, things will continue to behave as you would expect. As with all Rosella libraries, there will be plenty of opportunity for subclassing and customization, so if you need something different from the provided defaults, it will be easy to change things.
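A bare-bones version of that default parsing, long options into a hash and everything else into a positional list, might look like this in JavaScript. The rules here are invented for illustration; the real Arguments class follows GetOpt::Obj conventions:

```javascript
// Sketch of default-rule argument parsing into opts + positional args.
// The syntax rules are simplified, not Rosella.CommandLine's exact ones.
function parseArgs(argv) {
    var opts = {};
    var args = [];
    for (var i = 0; i < argv.length; i++) {
        var a = argv[i];
        if (a.slice(0, 2) === "--") {
            var eq = a.indexOf("=");
            if (eq >= 0)
                opts[a.slice(2, eq)] = a.slice(eq + 1);   // --key=value
            else
                opts[a.slice(2)] = true;                  // bare flag
        } else {
            args.push(a);                                 // positional argument
        }
    }
    return { opts: opts, args: args };
}

var parsed = parseArgs(["--verbose", "--out=report.txt", "input.js"]);
// parsed.opts is { verbose: true, out: "report.txt" }; parsed.args is ["input.js"]
```

The Program object's run() dispatch table from the previous example is then just a lookup into that opts hash before falling through to the real main routine.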

I have a couple other projects in mind that I want to start playing with, some in Rosella, and some that intend to use Rosella to build bigger things. I'll certainly share more information about whatever else I am planning in future blog posts.

26 Aug 2011 7:00am GMT

Andrew Whitworth: JSOP, JavaScript On Parrot

I was looking at the CorellaScript project the other day, and wanted to try to tackle the same problem in a different way. This isn't an insult against CorellaScript, but I know a little bit more today than I did at the beginning of the summer, and some of our tools have progressed further than they had at the point when CorellaScript was designed and started. I wanted to see if we could convert to Winxed as an intermediary language, since Winxed is already syntactically similar to JavaScript in some ways, and since Winxed already handles most of the complicated parts of PIR generation.

My idea, in a nutshell, is this: We create a JavaScript to Winxed compiler in JavaScript, using Jison and Cafe. Jison is an LALR parser generator written in JavaScript, and Cafe is an old project to use Jison to compile JavaScript into JavaScript. At first, compiling JavaScript to itself doesn't sound like such a great thing to do, but if we do some basic tree transformations and make a few tweaks to the generated code suddenly it's producing Winxed instead of producing JavaScript. Now all we need is an object model and a runtime, and we have a basic stage 0 compiler for JavaScript on Parrot.

Over the weekend, when we were trapped indoors because of the hurricane, I put some of these ideas to the test. By the end of the weekend I had a new project called JSOP (JavaScript-On-Parrot; it's a lousy name, I need a better one). Today, the stage 0 JSOP compiler is parsing a decent amount of basic JavaScript and has a small test suite. JavaScript doesn't have classes like other languages do, so I had to add in some support to Rosella.Test to handle JavaScript tests. Now that I've done that, we can use Rosella.Test to write tests for JavaScript. Here's an example test file that I just committed:

load_bytecode("rosella/test.pbc");
Rosella.Test.test_list({
    test_1 : function(t) {
        t.assert.equal(0, 0);
    },
    test_2 : function(t) {
        t.assert.expect_fail(function() {
            t.assert.equal(0, 1);
        });
    },
    test_3 : function(t) {
        t.assert.is_null(null);
    }
})

That test file compiles down to the following Winxed code:

function __init_js__[anon,load,init]()
{
    load_bytecode('./stage0/runtime/jsobject.pbc');
}

function __main__[main,anon](var arguments)
{
    try {
        load_bytecode('rosella/test.pbc');
        Rosella.Test.test_list(new JavaScript.JSObject(null, null, function (t) {
                t.assert.equal(0, 0); }:[named('test_1')], function (t) {
                t.assert.expect_fail(function () {
                t.assert.equal(0, 1); }); }:[named('test_2')], function (t) {
                t.assert.is_null(null); }:[named('test_3')]));
    } catch (__e__) {
        say(__e__.message);
        for (string bt in __e__.backtrace_strings())
            say(bt);
    }
}

Formatting is kind of ugly right now, but it does the job. Executing this file produces the TAP output we expect:

<jsop git:master> ./js0.sh t/stage0/01-rosella_test.t
1..3
ok 1 - test_1
ok 2 - test_2
ok 3 - test_3
# You passed all 3 tests

So that's not a bad start, right?

The stage 0 JSOP compiler is very simple, and I hope other people will want to hack on it. I've borrowed, with permission, code from the Cafe project to implement the parser. Cafe comes with a JavaScript grammar for Jison already made, and some AST node logic. I added a new tree format called WAST which is used to generate the winxed code. I modified the Cafe AST to produce WAST, and deleted all the other code generators and logic from Cafe.

It all sounds more confusing than it is. The basic flow is like this:

JavaScript Source -> Jison AST -> WAST -> Winxed

Winxed converts it to PIR, and execution continues from there.

So what still needs to be done? Well, lots! I've only implemented about 25% of a stage 0 compiler, so most of the syntax in JavaScript is not supported yet. I've only implemented enough to get a basic test script running (functions, closures, "new", string and integer literals, etc). Basic control flow constructs and almost all the operators are not implemented yet. I've also implemented a basic runtime, but I don't have any of the built-in types like Arrays yet, or most of the nuances of the prototype chain, etc.
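Those prototype-chain nuances are exactly what the runtime has to model. In plain JavaScript, the chain behaves like this:

```javascript
// What the stage 0 runtime has to mimic: property lookup walks the
// prototype chain, and `new` links the fresh object to the
// constructor's prototype.
function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + " makes a sound"; };

var dog = new Animal("Rex");
var fromChain = dog.speak();                  // found on Animal.prototype, not on dog
var ownSpeak = dog.hasOwnProperty("speak");   // false: lives on the prototype
var ownName = dog.hasOwnProperty("name");     // true: set by the constructor

// Shadowing: an own property hides the prototype's version.
dog.speak = function () { return "woof"; };
var shadowed = dog.speak();                   // "woof"
```

Every one of those lookups (chain walk, own-property test, shadowing) is something the Winxed-side object model needs to get right before built-in types can sit on top of it.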

The ultimate goal is a bootstrapped JavaScript compiler. Once we have stage 0 being able to parse most of the JavaScript language and execute it, we need to create a stage 1 compiler written in JavaScript. It can borrow a lot of code from Stage 0 (including the Jison parser). For that, we're going to need PCRE bindings, among other runtime improvements. When we can use Stage 0 to compile a pure JS stage 1 compiler, we self host and it's mission accomplished. We've got a long way to go still, but I think this is a promising start and I'm happy with the quick rate of progress so far. I'm looking for people who are interested in helping, so please get in touch (or just create a fork on github) if you want to help build this compiler.

26 Aug 2011 7:00am GMT

24 Aug 2011

feedPlanet Parrot

parrot.org: GSoC: Wrapping up, and some documentation

The hard "pencils down" date was Monday, so now seems like a good time for a blog post summarizing what I ended up completing.

I have DPDA generation and parsing working for LR(0) and SLR(1) grammars. I have the beginnings of a grammar specification DSL (a grammar, but no actions or tokenizer yet; it's in the dsl branch). I do not have support for LALR(k) grammars or general LR(k) grammars. I have not implemented generating code to parse grammars (as opposed to interpreting the DPDA in order to parse them).
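Interpreting the DPDA means driving one generic loop from transition tables instead of emitting specialized parsing code. Here is a toy sketch of the idea: a hand-written DPDA recognizing a^n b^n, purely for illustration and unrelated to the project's actual tables:

```javascript
// Toy deterministic pushdown automaton interpreter: the transitions are
// plain data, and one generic loop drives them. This mirrors the shape of
// interpreting a parser's DPDA rather than generating code for it.
var dpda = {
    start: "read_a",
    accept: "done",
    // transitions[state][inputSymbol][topOfStack] -> { next, push }
    // (push lists are written top-first)
    transitions: {
        read_a: {
            a: { Z: { next: "read_a", push: ["A", "Z"] },   // first a
                 A: { next: "read_a", push: ["A", "A"] } }, // more a's
            b: { A: { next: "read_b", push: [] } }          // switch to b's
        },
        read_b: {
            b: { A: { next: "read_b", push: [] } },         // match each a
            $: { Z: { next: "done",   push: ["Z"] } }       // balanced: accept
        }
    }
};

function runDpda(machine, input) {
    var state = machine.start;
    var stack = ["Z"];                          // Z marks the stack bottom
    var symbols = input.split("").concat("$");  // $ is end-of-input
    for (var i = 0; i < symbols.length; i++) {
        var byInput = machine.transitions[state] || {};
        var byTop = byInput[symbols[i]] || {};
        var t = byTop[stack[stack.length - 1]];
        if (!t) return false;                   // no transition: reject
        stack.pop();
        for (var j = t.push.length - 1; j >= 0; j--) stack.push(t.push[j]);
        state = t.next;
    }
    return state === machine.accept;
}
```

Generating code instead of interpreting would mean compiling those tables into specialized branches, which is the part of the project that was left undone.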

read more

24 Aug 2011 11:19pm GMT

Tadeusz Sośnierz (tadzik): What is Production Ready?

"When will Perl 6 be production ready?" - they ask from time to time. I know the feeling, there was a time I wanted to know too, and after a year working on Rakudo, I can truly say,

I have no freaking idea!

I'd really like to tell you, seriously. If you ask #perl6, they will start tricking you into thinking that it's ready enough and they're actually using it, right? Tricky bastards. But, what do you actually ask for? What is this mighty Production Ready?

I dedicated some thinking to this today. What makes something Production Ready? I can think of two possibilities:

  1. The creators declare it Production Ready
  2. People start using it in Production Environment

The first one is a bit tricky to achieve when it comes to Perl 6. As we know, Perl 6 is a language. How can a language be Production Ready? Think, think. Is there another example of something which is more a spec than an end-user product, and is either not declared finished, or has a spec freeze date ridiculously far in the future? Right, it's HTML5. The spec is a draft, it's nowhere near finished, and none of the implementations implements all of it. So what makes HTML5 production-ready? I don't think it's been declared ready by its creators. It's that people didn't bother with official opinions and started actually solving problems with it. They took the existing implementations and made use of them. Therefore, we can safely assume that by "Production Ready Perl 6" we really mean "a Perl 6 compiler I can use to get the job done". So what are the current compilers lacking for the majority of people?

Yes, I'm asking you. You don't really know, do you? You didn't even try them? It's just that people don't use them too often, so they're probably crap, right? Ok, there's some logic in that.

There is a possibility that Perl 6 is already capable of solving your problems. You should try it. But! Enough of the advertising business, I'm wondering here.

"So what is your Production Ready?", you may ask. What do I expect from Perl 6 before it will be Production Ready for me? It's not there yet, I'm not gonna lie. It's solving my problems, it pays my bills, but it lacks this Something that will make it Purely Awesome. In my opinion, there are two major things we're missing:

  1. Speed. Not all things I write need to be blazingly fast, but what is the point of an amazingly expressive language if the bottleneck of the development process is program runtime?
  2. Access to Perl 5 modules from CPAN. Yes, I know of modules.perl6.org fairly well, believe me. Still, it will take ages, if not infinity, to make it as awesome as CPAN is. Blizkost is a bridge between Perl 5 and Parrot and it's capable of running simple stuff already.

That's it. I can live without most of the things. But what I'm really looking for is a better Perl 5. It needs CPAN, and it needs to be less slow than it is. I'm not looking for C performance; I could probably live with Perl 5 performance here.

That's what I'm missing. And what is Your Production Ready?


24 Aug 2011 2:39pm GMT

22 Aug 2011

feedPlanet Parrot

parrot.org: End of GSoC, but not Time to Stop...

I've spent the last few days cleaning up my branch: adding documentation, checking that it passes the code standard tests, and trying to compile Rakudo nom. Don't get too excited, we can't compile nom directly to bytecode. Heck, it can't compile squaak directly yet. But I wanted to make sure that all my tinkering hasn't broken the original PIR generation path.

read more

22 Aug 2011 5:26am GMT

21 Aug 2011

feedPlanet Parrot

parrot.org: And So Ends the Flight of the Honey Bee

Actually, I suppose the flight has really just begun. It's true that GSoC is nearly at its end but, ironically enough, it doesn't really feel like the end so much as a new beginning. A new debug data format is in the works and it has so much potential!

read more

21 Aug 2011 3:32am GMT

20 Aug 2011

feedPlanet Parrot

parrot.org: GSOC 12: Coming to an end

While I didn't implement everything I thought I would, there is now a basic framework for bytecode generation in the nqp_pct branch. I'm uncertain if it should be merged... While the bytecode generation doesn't fully work, it doesn't interfere with the existing usage of PCT and does have the nice feature that PAST::Compiler is now written in NQP for ease of hacking. I'll leave that up to the rest of the community to decide. The rest of this blog post is the contents of docs/pct/bytecode.pod, which I hope will be helpful if anyone wants to explore what I've been working on all summer.

read more

20 Aug 2011 6:48pm GMT