The Thrift State of Code - Direction Change

Updated 25 February 2014

Getting the Thrift code to a stable, operational state via the c_glib generated output is proving too time-intensive. I am freezing the c_glib code for now and moving ahead with C++ as the preferred language of choice. There are multiple advantages here, not just the support level available and the state of the code. As it currently stands, I have client and server code generated and working with the defined ebrain protocol. This changes direction slightly; I will be pushing the code to git in a bit.

Aiming to build this as a library which can be linked into the main binary. I will also keep the server and client code (specific to the protocol) buildable stand-alone, which will help in testing and improving the protocol(s) independently. Options for compiling either or both will be provided in the Makefile.

This method brings the roadmap back in line with the planned future directions and sets things up very well moving ahead. We can look to split the code base into back-end and front-end operations, a move we have been planning for a while now. Among the major benefits are properly community-tested code and greater reliability. The servers available (from simple to threaded) will serve us well with this approach. I will return to the c_glib code base once time permits; hopefully, as that language support matures, we can move this part of the ebrain protocol back to C. For now, C++ is a great alternative: it fits very well into our design, with little or no drawback.
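For reference, the C++ server mentioned above follows the standard Thrift 0.9 generated-skeleton pattern. Below is a minimal sketch of that pattern, not the project's actual code: it assumes the IDL shown later in this post has been compiled with `thrift --gen cpp ebp_proto.thrift`, so the `announcelist*` class names are the ones the generator derives from that IDL, and it will only build against the Thrift 0.9 library plus the generated `gen-cpp` headers.

    // Sketch of a single-threaded Thrift 0.9 C++ server for the
    // announcelist service. Assumes headers generated into gen-cpp/ by:
    //   thrift --gen cpp ebp_proto.thrift
    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/server/TSimpleServer.h>
    #include <thrift/transport/TServerSocket.h>
    #include <thrift/transport/TBufferTransports.h>

    #include "gen-cpp/announcelist.h"   // generated service skeleton

    using namespace apache::thrift;
    using namespace apache::thrift::protocol;
    using namespace apache::thrift::transport;
    using namespace apache::thrift::server;

    // Our custom code lives in the handler; the processor calls into it.
    class announcelistHandler : virtual public announcelistIf {
     public:
      bool sendlist(const applist& apps) {
        // Act on the received application list here.
        return true;
      }
    };

    int main() {
      boost::shared_ptr<announcelistHandler> handler(new announcelistHandler());
      boost::shared_ptr<TProcessor> processor(new announcelistProcessor(handler));
      boost::shared_ptr<TServerTransport> serverTransport(new TServerSocket(9090));
      boost::shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
      boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

      // The simple server: socket + transport + protocol + processor, one thread.
      TSimpleServer server(processor, serverTransport, transportFactory, protocolFactory);
      server.serve();
      return 0;
    }

Swapping `TSimpleServer` for `TThreadedServer` is the usual route from the simple server to the threaded one mentioned above.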

-- Below is the old post -- related to c_glib generated code --

Been a while since I mentioned what's been going on; another follow-up is due. For those that have been following, Thrift was, and is, another major piece of the puzzle.

Since I began this experiment with Thrift, version 0.9 has been released, and that is what I am working with at the moment.

Here is a dump of notes I have collected along the journey. This will be a living post, which I will keep updating as the Thrift code reaches maturity.

This is the current eBrainPool IDL file, our first messaging system, intended to replace the application menu sharing information:

:: thrift code protocol :: eBrainPool IDL

    struct applist {
     1: required string name,
     2: required string command,
     3: optional string comment,
    }

    exception applistfailure {
     1: string failmsg,
    }

    service announcelist {
     bool sendlist(1:applist apps) throws (1:applistfailure ouch);
    }

simple form to generate:

    thrift --gen c_glib ebp_proto.thrift

It will create a dir 'gen-c_glib' and dump the generated C code in there. That is what you include in your own code.

I have yet to find a straightforward working example of server code with c_glib, or rather anything more than a scattered collection of posts and some code.

I hope this will be a digested version of that; once I am happy with the state of the server code, I will post it.

Creating the Server: Single Threaded (incomplete, in progress). As I currently understand it:

Socket -> Server Socket

Transport -> Create -> Open -> Listen

Processor -> act on the data flow: this is really where you start to work on your custom code/functions.

Using the simple-server code, this is abstracted away. We define a basic processor, which does nothing, to get us up and running.

The pain factors: with c_glib, a lot more of the heavy lifting needs to be written in by hand. This one took a while to understand properly. The C++ code, for example, does a great job of this.

The main functions in the files generated:

    -- :: main func/code autogenerated --

    gboolean announcelist_client_send_sendlist
    gboolean announcelist_client_recv_sendlist
    gboolean announcelist_client_sendlist

    -- :: main func/code autogenerated --

Most of these are utilised internally when we use the specified Thrift functions, like writing and reading data, checking for the type, and so on.
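For comparison, the C++ generator collapses that send/recv pair into a single client method. Below is a minimal sketch of the C++ client side, again assuming `gen-cpp` output from the same IDL compiled with `thrift --gen cpp ebp_proto.thrift`; the host, port, and application values are purely illustrative, and it only builds against the Thrift 0.9 library plus the generated headers.

    // Sketch of a Thrift 0.9 C++ client for the announcelist service.
    #include <thrift/transport/TSocket.h>
    #include <thrift/transport/TBufferTransports.h>
    #include <thrift/protocol/TBinaryProtocol.h>

    #include "gen-cpp/announcelist.h"   // generated client and types

    using namespace apache::thrift::transport;
    using namespace apache::thrift::protocol;

    int main() {
      // Transport stack: socket -> buffered transport -> binary protocol.
      boost::shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
      boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
      boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
      announcelistClient client(protocol);

      transport->open();
      applist apps;                      // generated struct from the IDL
      apps.name = "firefox";             // illustrative values
      apps.command = "/usr/bin/firefox";
      try {
        client.sendlist(apps);           // wraps the send/recv pair internally
      } catch (const applistfailure& ouch) {
        // the IDL's declared exception surfaces as a C++ exception
      }
      transport->close();
      return 0;
    }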

I have sample server code running: creating the socket and transports, listening for incoming connections, and the processor handling the data being transported.
