
Monday, October 6, 2008

Singleton Pattern for Python

In software engineering, the singleton pattern is a design pattern that is used to restrict instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. Sometimes it is generalized to systems that operate more efficiently when only one or a few objects exist.


import threading

class Singleton(object):
    instance = None
    lock_obj = threading.RLock()

    def __new__(cls, *args, **kwargs):
        return cls.get_instance( *args, **kwargs )

    @classmethod
    def get_instance(clazz, *args, **kwargs):
        with clazz.lock_obj:
            if clazz.instance is None:
                clazz.instance = object.__new__(clazz)
                clazz.instance.init(*args, **kwargs)
            return clazz.instance

    def __init__(self, *args, **kwargs):
        super( Singleton, self ).__init__()

    def init(self, *args, **kwargs):
        pass

class Test(Singleton):
    def init(self, *args, **kwargs):
        # do initializations here, not in __init__
        pass

print Test()
print Test()


Printing the result of "creating" the two Test objects shows that only one Test object was actually created: both prints show the same instance.
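A quick sanity check makes this concrete:

a = Test( )
b = Test( )
print a is b    # True: both calls returned the same instance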

This pattern isn't needed very often in Python, but when it is needed it addresses the problem very handily.

Note: this code is based on something I found while searching earlier this year; I no longer remember exactly where the original came from.

Friday, October 3, 2008

EVE Online's Stackless Architecture

I just saw this article about EVE Online's Server Model. Very interesting and informative read, and a testament to Stackless's reliability and usability.

Weekly Roundup

This week has seen some interesting news in the Python world.
  • In particular, the release of Python 2.6, with support for the json module and the new multiprocessing package.
  • In my research for other projects, I ran across an older article by the ineffable effbot about thread synchronization.
  • One of the more interesting, and sometimes confusing, parts of Python is list comprehensions, explained in good detail in this article on comprehending list comprehensions.
  • I don't often need to write a parser, but the ideas presented in the "Zen of Parsing" seem very useful for those rare instances.
On the other hand, not everything I did this week was programming related.

Thursday, October 2, 2008

Perspective Broker Authentication for Stackless Python



Twisted Matrix's Perspective Broker is, to me at least, the main compelling reason to use the entire framework. It's also an area that lacks documentation usable by a newcomer to the framework.

One of the things provided by the Perspective Broker is an authentication framework, based around the concept of an avatar. We're going to create a Perspective Broker authenticator that is more complex than what is presented in the Twisted Matrix documentation.

In one of the projects I worked on, we needed to have a central authentication server against which all the other object brokers would authenticate. Due to the way that Twisted operates, a user needs to authenticate against the broker that they are currently connected to. We need to build an authenticator that can authenticate for itself, or, if needed, recursively authenticate against another server.

To achieve this goal, we run into the first hurdle: Twisted doesn't pass a meaningful password around. This changes the order of authentication slightly, then, as we now need to authenticate against the main authenticating server first. That server then gives us a cookie, known as authentication_cookie in the code below. This cookie is good for this login session with the main authenticator. When connecting and authenticating against any other broker in the system, we pass the username and this cookie as our credentials.
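To make the flow concrete, here is a rough sketch of the client side. The PBClientFactory and login calls are standard Twisted; the host names, port numbers, and the "get_authentication_cookie" remote call are made up for illustration.

from twisted.spread import pb
from twisted.cred import credentials
from twisted.internet import reactor

def login_everywhere( username, password ):
    # Step 1: authenticate against the main authentication server
    # with the real password.
    main_factory = pb.PBClientFactory( )
    reactor.connectTCP( "auth.example.com", 8800, main_factory )
    d = main_factory.login( credentials.UsernamePassword( username, password ) )

    def got_main_avatar( avatar ):
        # Step 2: ask the main server for our session cookie.
        return avatar.callRemote( "get_authentication_cookie" )

    def got_cookie( cookie ):
        # Step 3: use (username, cookie) as the credentials for any
        # other broker in the system.
        other_factory = pb.PBClientFactory( )
        reactor.connectTCP( "world.example.com", 8801, other_factory )
        return other_factory.login( credentials.UsernamePassword( username, cookie ) )

    d.addCallback( got_main_avatar )
    d.addCallback( got_cookie )
    return d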

from zope import interface
from twisted.cred import checkers, credentials
from twisted.python import failure

# NoSuchAccount, AuthenticationCookieMismatch, InvalidPassword, the
# login_server module, and the reactor module that provides the
# deferred_tasklet / blocking_tasklet decorators are project code.

class UsernamePasswordDatabaseChecker:
    interface.implements(checkers.ICredentialsChecker)

    credentialInterfaces = (credentials.IUsernamePassword,
                            credentials.IUsernameHashedPassword)

    def __init__( self, login_service ):
        self.login_service = login_service

    def passcheck( self, account, credentials ):
        if not account:
            return failure.Failure( NoSuchAccount( ) )
        c1 = credentials.checkPassword( account.password )
        c2 = credentials.checkPassword( account.authentication_cookie )
        if c1:
            return login_server.MasterAuthenticationStatus( account.username,
                                                            account, 1 )
        elif c2:
            return login_server.MasterAuthenticationStatus( account.username,
                                                            account, 0 )
        elif not c1 and not c2 and account.authentication_cookie:
            return failure.Failure( AuthenticationCookieMismatch( ) )
        elif not c1 and not c2 and not account.authentication_cookie:
            return failure.Failure( InvalidPassword( ) )
        return failure.Failure( NoSuchAccount( ) )

    @reactor.deferred_tasklet
    def store_auth_cookie( self, auth_status ):
        bcr = reactor.blocking_tasklet( self.login_service.callRemote )
        return bcr( "store_authentication_cookie", auth_status )

    @reactor.deferred_tasklet
    def requestAvatarId( self, credentials ):
        if hasattr( self.login_service, "callRemote" ):
            bcr = reactor.blocking_tasklet( self.login_service.callRemote )
            acct = bcr( "get_account", credentials.username )
            check = self.passcheck( acct, credentials )
            if isinstance( check, failure.Failure ):
                return check
            status = bcr( "account_status", acct )
            rv = self.store_auth_cookie( status )
            return rv
        else:
            rv = self.login_service.authenticate_account( credentials )
            return rv

We use one of the function decorators from when we first started integrating Twisted into Stackless Python: deferred_tasklet. If you haven't read that article, the deferred_tasklet decorator wraps the function to run in a separate Stackless tasklet, while still returning a Deferred that can be used by the rest of the Twisted machinery.

The referenced member login_service is a straight-forward object that implements the function authenticate_account. If the credentials it is passed are good, it returns an object -- this object is the avatar required by the Perspective Broker. If the credentials are bad, it returns None, which then gets translated into an appropriate remote exception.
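For reference, a minimal in-process login_service might look something like this. It is only a sketch of the behavior described above; the dictionary-backed account store is an assumption, not the original code.

class SimpleLoginService:
    def __init__( self, accounts ):
        # accounts: a dict mapping username -> account object that has
        # .password and .authentication_cookie attributes
        self.accounts = accounts

    def authenticate_account( self, credentials ):
        account = self.accounts.get( credentials.username )
        if account is None:
            return None
        if credentials.checkPassword( account.password ):
            return account    # this object serves as the avatar
        return None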

The core ingredient in this is whether or not UsernamePasswordDatabaseChecker's member login_service has a member method callRemote. In this scenario, we make remote calls to do the authentication. We use another of the function decorators, blocking_tasklet, to do this so that our code can remain synchronous in style.

All in all, using Stackless tasklets to implement this setup results in fewer lines of code, and code that is straight-forward and easy to understand. The purely Twisted incarnation of this setup resulted in a crazy amount of nested callbacks that were very difficult to follow during initial testing, let alone a few months later.

The examples for login_service can be provided if anyone wants them; I just need to dig through the CVS repository to find them.

Tuesday, September 30, 2008

A quick update...

I planned on having an entry tomorrow showing a more in-depth example of combining Twisted with Stackless python, but it's taking a little longer than expected to get all the pieces together. The example will be using what is in my opinion the most useful part of Twisted: the Perspective Broker. I have several working systems that utilize it, but pulling out just enough to demonstrate how to use the PB with Stackless is proving a little troublesome.

It should be ready by Thursday.

Multi-Threaded Twisted / Stackless Integration




Another way to integrate Twisted with Stackless Python is to use multiple threads. One thread handles Twisted's reactor while Stackless tasklets run in at least one other thread. This reduces the deterministic nature of Stackless, but under certain conditions it may be more effective than trying to integrate Twisted and Stackless into a single thread. Communication between the threads is handled through a channel, which, according to the Stackless Python documentation, is thread-safe.

import threading

import stackless
from twisted.internet import defer, reactor

the_channel = stackless.channel( )

def a_func( *args ):
    print "a_func:", args
    return args

def dispatch( d, func, *args, **kwargs ):
    d1 = defer.maybeDeferred( func, *args, **kwargs )
    d1.addCallback( lambda x: reactor.callFromThread( d.callback, x ) )
    d1.addErrback( lambda x: reactor.callFromThread( d.errback, x ) )

For our example, we'll be calling a_func to run in the Stackless thread. This is handled through the helper function dispatch. The result of the function will be wrapped up in a Deferred. Through the reactor's callFromThread method we'll be able to fire the callback chain inside the thread running the main Twisted reactor loop.

the_channel is our cross-thread communication channel, through which the requests for function invocation will be passed.

def stackless_dispatcher( chan ):
    while True:
        try:
            d, func, args, kwargs = chan.receive( )
            t = stackless.tasklet( dispatch )
            t( d, func, *args, **kwargs )
            print threading.currentThread( )
            stackless.schedule( )
        except:
            break

This is the main loop of the Stackless thread. This method loops until an error condition occurs -- in this simplified version that is enough. It blocks on the main channel, until it receives a function, function arguments, and a Deferred that will be fired upon function completion. It then creates a tasklet to run the function in.

def call_in_stackless( chan, func, *args, **kwargs ):
    d = defer.Deferred( )
    t1 = stackless.tasklet( chan.send )
    t1( (d, func, args, kwargs) )
    stackless.schedule( )
    return d

This function is called from within the reactor's thread to cause a function invocation inside the Stackless thread. Because Stackless complains about deadlock when sending on channels, we have to create a tasklet to send on the main channel. The function, its arguments, and a newly created Deferred are sent via the channel, and the Deferred is returned from the function. Ultimately, this Deferred will have its callback chain fired, so at this point traditional Twisted-style programming can continue.

def test( chan ):
    print threading.currentThread( )
    d = call_in_stackless( chan, a_func, 1 )
    d2 = call_in_stackless( chan, a_func, 1, 2 )
    dl = defer.DeferredList( [ d, d2 ] )
    dl.addCallback( lambda x: reactor.callLater( 0, reactor.stop ) )
    def ender( x, chan ):
        t = stackless.tasklet( chan.send )
        t( x )
        stackless.schedule( )
    dl.addCallback( ender, chan )

reactor.callInThread( stackless_dispatcher, the_channel )
reactor.callLater( 0, test, the_channel )
reactor.run( )

This is just a test of the premise. It should operate as expected, with a_func being invoked twice inside the Stackless thread before the reactor is stopped. We force a shutdown of the Stackless loop by passing a single argument through the channel -- since the receiving side is expecting a 4-part tuple this will cause an exception and stop the loop.

This form of integration does allow for more concurrency than the previously discussed integration method. While we have to worry about Python's GIL (global interpreter lock) cutting down our actual concurrency, if the application is heavily I/O-bound this is not much of an issue, since the GIL is released whenever control passes into Twisted's reactor loop.

Saturday, September 27, 2008

Observer pattern for Python



The Observer pattern is mainly used to implement a distributed event handling system. The primary objective of this pattern is to provide a way to handle run-time one-to-many relationships between objects in a loosely coupled arrangement.

In this configuration, the Observable object doesn't know anything more about its Observers than a very limited interface. The Observable needs to provide a wide interface to allow other objects to gain access to its current state.

The event from the observable object's point of view is called notification and the event from the observers' point of view is called update.


import weakref

class Observable( object ):
    def __init__( self, *args, **kwargs ):
        super( Observable, self ).__init__( *args, **kwargs )
        self.__dirty = False
        self.__observers = weakref.WeakKeyDictionary( )

    def attach_observer( self, obs ):
        if obs not in self.__observers:
            self.__observers[obs] = 1
        return self

    def detach_observer( self, obs ):
        if obs in self.__observers:
            del self.__observers[obs]
        return self

    def set_dirty( self, d ):
        self.__dirty = d
        return self.__dirty

    def is_dirty( self ):
        return self.__dirty

    def notify_all( self ):
        for observer in self.__observers.keys( ):
            observer.observer_update( self )

    def notify_check( self ):
        if self.is_dirty( ):
            self.notify_all( )
            self.set_dirty( False )
attach_observer and detach_observer maintain the list of Observers that are interested in this object. After any change in state, notify_all should be called. If the state change is part of a larger transaction, the combination of set_dirty and notify_check should be used instead.

If you're also using Stackless python, you may want to have notify_all use the event-loop mechanism we've previously discussed.

class Observer( object ):
    def __init__( self, *args, **kwargs ):
        pass

    def observer_update( self, observable ):
        pass
The Observer object is very easy to implement. Really, it only needs observer_update defined, since that method is called by Observable during notify_all. The observed object passes itself as the argument to observer_update so that the observer knows which of the objects it is currently observing has been updated.
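A minimal usage sketch tying the two together (the Thermometer and Display names are just for illustration):

class Thermometer( Observable ):
    def set_temperature( self, value ):
        self.temperature = value
        self.set_dirty( True )
        self.notify_check( )

class Display( Observer ):
    def observer_update( self, observable ):
        print "temperature is now", observable.temperature

thermo = Thermometer( )
display = Display( )
thermo.attach_observer( display )
thermo.set_temperature( 21 )    # prints: temperature is now 21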

Thursday, September 25, 2008

Event-based Programming for Python


Oftentimes, you need to have objects that communicate with each other via events. This is a very useful setup, for example, in a GUI, where these events represent things like mouse clicks, keystrokes, or button presses. That's not what I developed these classes for (I was more interested in simulating things, and the event system seemed like the most natural fit), but the recipe is still relevant to other event handling needs.

We're going to build upon earlier discussions, most notably about Stackless event loops by adding in some concrete examples of using that recipe.

The Observer pattern strikes again, as I'm defining the relationship between the event generator and the event listener as one of Observable and Observer. We'll make use of the channel_processor function decorator described in Stackless event loops.

import heapq
import weakref

# channel_processor comes from the earlier Stackless Event Loops article.

class EventObserver( object ):
    def __init__( self, *args, **kwargs ):
        super( EventObserver, self ).__init__( *args, **kwargs )

    @channel_processor
    def event_transieve( self, *args, **kwargs ):
        evt = kwargs.get( 'data', None )
        self.process_event( evt )

    def process_event( self, event ):
        pass

    def event_notify( self, event ):
        self.event_transieve( data=event )

This is straight-forward enough. The only trickery (if you could call it that) is in the event_transieve method, and all that does is take whatever is passed as the keyword argument data and call the method process_event. In this base class implementation, that method does nothing.

One bit of niftiness does occur, however, when the event_transieve method is invoked. Through the use of function decorators (and therefore transparently to the calling client), this method actually spans tasklets, granting some semblance of concurrency.
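A concrete observer only has to override process_event; for example (the class name is just for illustration):

class PrintingObserver( EventObserver ):
    def process_event( self, event ):
        print "received event:", event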


class EventObservable( object ):
    def __init__( self, *args, **kwargs ):
        super( EventObservable, self ).__init__( *args, **kwargs )
        self.__event_observers = weakref.WeakKeyDictionary( )
        self.__events = [ ]

    def attach_event_observer( self, obs, level=1 ):
        if obs not in self.__event_observers:
            self.__event_observers[obs] = level
        return self

    def detach_event_observer( self, obs ):
        if obs in self.__event_observers:
            del self.__event_observers[obs]
        return self

    @channel_processor
    def dispatcher( self, *args, **kwargs ):
        data = kwargs.get( 'data', None )
        wlist = [ ]
        for key in self.__event_observers.keys( ):
            val = self.__event_observers[key]
            seok = SortableEventObserverKey( key, val )
            heapq.heappush( wlist, seok )
        while len( wlist ):
            obs = heapq.heappop( wlist )
            obs( data )

    def dispatch_event( self, event ):
        self.dispatcher( data=event )
        return event
Now, we can safely ignore the attach_event_observer and detach_event_observer methods; they only exist to implement the Observer pattern. The only method we really care about at the moment is dispatcher.

In this method we simply loop over all the currently registered observers, invoking (ultimately) their event_notify method. If you don't see how that happens, just be patient and wait until we look at the SortableEventObserverKey helper class and its definition of the __call__ method.

class SortableEventObserverKey( object ):
    def __init__( self, kval, weight, *args, **kwargs ):
        super( SortableEventObserverKey, self ).__init__( *args, **kwargs )
        self.__value = kval
        self.__weight = weight

    def __cmp__( self, other ):
        return cmp( self.__weight, other.__weight )

    def __call__( self, event ):
        return self.__value.event_notify( event )

    def __repr__( self ):
        return "%s, %s" % ( self.__value, self.__weight )

Now, I hate that I had to throw something like that into the discussion. The helper class only exists to make the comparison functions easier when using the heap queue. For anyone unfamiliar with heaps: Python's heapq module implements a min-heap, which ensures the lowest-weighted object is at the front of the queue and will be the first one taken out of the structure.
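A quick demonstration of that ordering, with plain integers standing in for the wrapped observers:

import heapq

wlist = [ ]
for weight in ( 3, 1, 2 ):
    heapq.heappush( wlist, weight )
while len( wlist ):
    print heapq.heappop( wlist )    # prints 1, then 2, then 3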

class EventParticipant( EventObservable, EventObserver ):
    def __init__( self, *args, **kwargs ):
        super( EventParticipant, self ).__init__( *args, **kwargs )
        # pytypes.get_event_manager is a project-specific helper that
        # returns a named event-manager object
        event_manager = kwargs.get( "event_manager", "events" )
        self.event_manager = pytypes.get_event_manager( event_manager )

    def generate_event( self, event_type, *data, **kwargs ):
        evt = self.event_manager.create_event( self, event_type, *data, **kwargs )
        return self.dispatch_event( evt )

Here's the easy class to implement. It defines the EventParticipant, which is both the Observable and the Observer. This is, ultimately, the class that I extend for my simulations, since my program domain requires objects to both generate events and be interested in other objects' events. Simply extending from this class gives you that ability in a nice, clean, and concurrent fashion (or at least as concurrent as Stackless gets you).
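As an illustration only (the Door class and the "opened" event type are made up, and this assumes the project-specific event manager is available):

class Door( EventParticipant ):
    def process_event( self, event ):
        print "observed:", event

front_door = Door( )
alarm = Door( )
front_door.attach_event_observer( alarm, level=2 )
front_door.generate_event( "opened" )    # the alarm's process_event sees the event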

Tuesday, September 23, 2008

Stackless Event Loops


This is a follow-on article to Stackless Python Meets Twisted Matrix. This time we'll see how to use function decorators to turn an ordinary-looking function into a looping event dispatcher, useful for the Observer design pattern.

Through the syntactical power of decorators, one can convert any function into a continuously running event loop. This utilizes Stackless Python, which has been discussed in earlier articles on not only this web site, but many others as well.

The premise behind this event loop is this: a tasklet runs and dispatches incoming "events" to a handler function. To the outside caller, it appears to be a regular function call, but the mechanisms provided by the decorator allow the execution of the "event handler" to run in a separate tasklet. If desired, this premise can be extended further to allow the event loop to run in its own thread.

First, let's look at the class that does all the heavy lifting.
import stackless
from twisted.internet import defer
from twisted.python import failure

# deferred_tasklet, blocking_tasklet, and the _wrapper helper come from
# the decorators described in the earlier Stackless/Twisted article.

class ChannelProcessor:
    def __init__( self, action ):
        self.channel = stackless.channel( )
        self.action = action
        self.running = True
        self.process( )

    def stop( self ):
        self.running = False
        self.channel.send( 1 )

    @blocking_tasklet
    def __call__( self, *args, **kwargs ):
        c = stackless.channel( )
        self.channel.send( (c, args, kwargs) )
        rv = c.receive( )
        if isinstance( rv, failure.Failure ):
            raise rv.value
        return rv

    @deferred_tasklet
    def process( self ):
        while self.running:
            vals = self.channel.receive( )
            if isinstance( vals, tuple ) and len( vals ) == 3:
                c, args, kwargs = vals
                d = defer.Deferred( )
                d.addBoth( c.send )
                _wrapper( d, self.action, *args, **kwargs )
            else:
                self.running = False

This code makes use of the decorators described in an earlier article, available here. As you will notice, the core of the event loop is contained in the process function, which runs in its own tasklet (due to being decorated by the deferred_tasklet decorator). It doesn't matter if you aren't using Twisted for this code to work, although you will need Twisted installed for it to run (unless you change the mechanics of deferred_tasklet).

process simply loops until told otherwise (via the stop method), receiving data from its channel. If the data is a tuple of 3 items, it calls the original function (stored in the action member). The return value from the event handler is sent back on a channel, which we received as the first element of the tuple.

An outside caller enters this mechanism via the __call__ method. This method creates a new channel, and then passes that channel, along with the parameters it was called with, down the object's channel. It then waits for data to be sent back to it. After a quick look at the data returned, it either returns the data through the traditional means or raises an exception (if it received one).

Now, for the decorator:

import types

from decorator import decorator

@decorator
def channel_processor( f, *args, **kwargs ):
    func_obj = None
    if type( f ) == types.MethodType:
        target = f.im_self
        target_name = f.func_name + '_cp'
        if not hasattr( target, target_name ):
            func_obj = ChannelProcessor( f )
            setattr( target, target_name, func_obj )
        else:
            func_obj = getattr( target, target_name )
    elif not hasattr( f, "_cp" ):
        setattr( f, '_cp', ChannelProcessor( f ) )
        func_obj = f._cp
    else:
        func_obj = f._cp
    return func_obj( *args, **kwargs )

Here, we create a ChannelProcessor object and stuff it into the calling function's member list (as a member named _cp). If it already exists, great. In any case, we then call the object, which will lead us into the __call__ method shown above.

A special case is made if we are an object's method, instead of a regular function. This does not happen in the regular use-case of a decorator (when using the @decorator_name syntax). It only happens when we do something like:

class A:
    def __init__( self ):
        self.func = channel_processor( self.func )

    def func( self, *args ):
        print "Here I am with arg list:", args

You use this method if you need each object to have its own tasklet that handles events. Using the standard decorator syntax results in each function having its own event handling tasklet.
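For comparison, the standard per-function form looks like this (handle_event is just an illustrative name):

@channel_processor
def handle_event( *args, **kwargs ):
    print "handling:", args

handle_event( "ping" )    # runs inside the function's event-loop tasklet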

I tend to use the per-function methodology, but as usual, your mileage may vary.

Friday, September 5, 2008

Adventures in Webpage Content Injection.

I run several web sites and recently decided that I wanted to add some common text to the bottom of all the pages. Since I don't generate the content, it would be best if the server did this as it served the pages.

A quick search through Apache's module directory and I saw that mod_layout fit the bill ... in theory, at least. I had tried, in vain, for the last two days to get mod_layout working with my antiquated Linux server (Fedora Core 3) before I decided that I should really look into my other options, especially since mod_layout hadn't been updated and, from what I could see on the forums, its developer wasn't exactly interested in making it work with Apache 2.0+.

Then I saw a post about how mod_rewrite could be cajoled into doing this sort of task. Essentially, it does this by rewriting the request into a CGI call, passing the originally requested file name as a parameter.

Like so:


RewriteRule /(.*) /wrapper.cgi?file=$1 [nc,l,qsa]


All the examples were using PHP. My server is old (as mentioned above) and you can't really find RPMs for older Fedora releases. So, since I don't have PHP installed, I looked at my options ... and it immediately struck me that Python would be up to the task.

So, you need to create a Python script named wrapper.cgi which contains:

#!/usr/bin/env python
import os
import re
import urllib

print "Content-type: text/html"
print

docroot = os.getenv( 'DOCUMENT_ROOT' )
fname = docroot + urllib.unquote( os.getenv( 'REQUEST_URI' ) )
buff = open( fname ).read( )

mobj = re.compile( '<body[^>]*>', re.IGNORECASE | re.VERBOSE )
mobj2 = re.compile( '</body>', re.IGNORECASE | re.VERBOSE )
obj = mobj.search( buff )
obj2 = mobj2.search( buff )
header = buff[:obj.end( )]
body = buff[obj.end( ):obj2.start( )]
footer = buff[obj2.start( ):]

print header
if os.path.exists( docroot + '/header.inc' ):
    print open( docroot + '/header.inc' ).read( )
print body
if os.path.exists( docroot + '/footer.inc' ):
    print open( docroot + '/footer.inc' ).read( )
print footer


So, now I have a framework and method for wrapping all the assorted web pages with some common header & footer code (such as some essential support links).

Wednesday, September 3, 2008

Plone and the annoying ATAmazon...

So, I recently tried to make more use of my Amazon Associates ID that I had gotten way back in 2001. Since I run Plone on a few of my sites, I tried to install the ATAmazon product.

It didn't work.

Not only did it not work, but it didn't even give me very helpful error messages to try to fix the problem myself. Other people seem to use it ... or at least know about it. I enter the proper ASIN (I know this because you can click on the "Buy" button that it shows and go to the proper item's page), but none of the details seem to be either retrieved or parsed properly.

Any tips or clues to help resolve this would be most welcome.

(EDIT: Silly Amazon "deprecated" that version of their web services. Unlike the usual method of deprecation, they simply turned it off!)

Monday, September 1, 2008

Stackless Python meets Twisted Matrix....




Sometimes, you come across two programming toolkits that would go great together. However, in the case of Twisted Matrix and Stackless Python, there's some legwork required to get these two great systems to work together.
Twisted requires that its reactor run in the main "tasklet", but if there is no network activity or other deferred code to execute, the reactor loop will stall the entire application, defeating the purpose of using tasklets and Stackless.

There is some setup required to get this all working together.

import stackless
from twisted.internet import reactor, task

reactor_tasklet = None

def reactor_run( ):
    global reactor_tasklet
    reactor_tasklet = stackless.getcurrent( )
    # repeatedly call stackless.schedule every 0.0001 seconds;
    # this prevents the reactor from blocking out the other tasklets
    schedulingTask = task.LoopingCall( stackless.schedule )
    schedulingTask.start( 0.0001 )
    reactor.run( )

t = stackless.tasklet( reactor_run )
t.run( )
# run the stackless scheduler
stackless.run( )

Now, extending this simple case to a more general solution involves the use of Python's function decorators. (I use the great decorator.py module to make decorators a little easier to write.)
from twisted.internet import defer
from twisted.python import failure

def __filter( d ):
    if isinstance( d, failure.Failure ):
        if isinstance( d.value, TaskletExit ):
            print "ignore taskletexit"
            return None
        return d
    return d

def __wrapper( d, f, *args, **kwargs ):
    try:
        rv = defer.maybeDeferred( f, *args, **kwargs )
        rv.addCallback( __filter )
        rv.addCallback( d.callback )
        rv.addErrback( __filter )
    except TaskletExit:
        pass
    except Exception, e:
        print e, dir( e )
        d.errback( e )


Above is just some boilerplate code. __filter screens out the TaskletExit exception that gets sent to tasklets; if this isn't done, the Twisted framework wraps it up in an instance of twisted.python.failure.Failure and you get "Unhandled error in Deferred" exceptions at the calling point. Since this is almost never what you want, it's easiest to just filter it out. Of course, in real code you'll remove the line reading 'print "ignore taskletexit"'.

__wrapper does the actual heavy lifting of the function call. It uses the maybeDeferred function to ensure that after the function call we are only dealing with Deferreds. __wrapper uses Twisted's usual callback mechanism to ensure that the Deferred it received as a function parameter is fired once the results of the actual function call are available. This parameter Deferred is essential for the function decorators described next to work.

from decorator import decorator

# reactor_tasklet is the module-level name set by reactor_run above

@decorator
def deferred_tasklet( f, *args, **kwargs ):
    d = defer.Deferred( )
    t = stackless.tasklet( __wrapper )
    t( d, f, *args, **kwargs )
    t.run( )
    return d

@decorator
def blocking_tasklet( f, *args, **kwargs ):
    f2 = deferred_tasklet( f )
    d = f2( *args, **kwargs )
    if ( reactor_tasklet != stackless.getcurrent( )
         and stackless.getcurrent( ) != stackless.getmain( ) ):
        return block_on( d )
    raise RuntimeError( "Cannot block in reactor task" )

def block_on( d ):
    chan = stackless.channel( )
    d.addBoth( lambda x, y=chan: y.send( x ) )
    return chan.receive( )

Here we have the two main function decorators, deferred_tasklet and blocking_tasklet, as well as the utility function block_on. The first of these simply returns a Deferred, suspiciously the very same Deferred that it passes as a parameter to the __wrapper function, which, if you've been paying attention, will be triggered once the results of the wrapped-up function are available. All we're really doing here is creating a stackless.tasklet and running __wrapper in that new microthread.

blocking_tasklet goes one step beyond this, taking the Deferred that we were passed earlier and converting it into a blocking function call. First, it does some sanity checks to ensure that it's not blocking in the same tasklet that Twisted's reactor is running in. Somewhere you need to store the value of stackless.getcurrent() when called from within the reactor's tasklet (the reactor_run function above does exactly that). We also need to make sure that our current tasklet is not the "main" Stackless tasklet; this should never happen, but I like to be safe at times.

The utility function block_on sets up a Stackless channel. It then adds a simple little lambda closure that only sends its parameter to the Stackless channel. This closure is added to both the callback and errback chains of the Deferred that we're going to wait on. After this is all set up, we then call receive, which blocks this tasklet until the Deferred is finished and the callbacks/errbacks are fired off. At that point, we receive the return value of the original function through the channel and can return it as the return value of our function.

As long as we are not in the same tasklet as Twisted's reactor, we can use this block_on function to turn our otherwise asynchronous code into sequentially executed synchronous code. This can also be done using Twisted's inlineCallbacks decorator, but that turns the decorated function into a generator, which isn't always what we want.
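A small usage sketch, assuming the setup above is already running (fetch_data and worker are invented names):

from twisted.internet import reactor, task

@deferred_tasklet
def fetch_data( n ):
    # returns a Deferred that fires about a second later with n * 2
    return task.deferLater( reactor, 1.0, lambda: n * 2 )

def worker( ):
    d = fetch_data( 21 )
    # blocks only this tasklet; the reactor keeps running
    print block_on( d )    # prints 42

t = stackless.tasklet( worker )
t.run( )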