Showing posts with label stackless.
Friday, October 3, 2008
EVE Online's Stackless Architecture
I just saw this article about EVE Online's Server Model. Very interesting and informative read, and a testament to Stackless's reliability and usability.
Thursday, October 2, 2008
Perspective Broker Authentication for Stackless Python
Twisted Matrix's Perspective Broker is, to me at least, the main compelling reason to use the entire framework. It's also an area lacking documentation that is approachable for a newcomer to the framework.
One of the things provided by the Perspective Broker is an authentication framework, based around the concept of an avatar. We're going to create a Perspective Broker authenticator that is more complex than what is presented in the Twisted Matrix documentation.
In one of the projects I worked on, we needed a central authentication server that all the other object brokers would authenticate against. Due to the way that Twisted operates, a user needs to authenticate against the broker that they are currently connected to. We need to build an authenticator that can authenticate for itself, or, if needed, recursively authenticate against another server.
To achieve this goal, we run into the first hurdle: Twisted doesn't pass a meaningful password around. This changes the order of authentication slightly, as we now need to authenticate against the main authenticating server first. That server then gives us a cookie, known as authentication_cookie in the code below. This cookie is good for this login session with the main authenticator. When connecting and authenticating against any other broker in the system, we pass the username and this cookie as our credentials.
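Before we get to the checker itself, here is a hedged sketch of what that two-step login might look like from the client's side. The host names, port numbers, and the get_authentication_cookie remote method are assumptions of mine, not part of the original system; only the PBClientFactory/login machinery is standard Twisted.
def login_everywhere( username, password ):
    from twisted.spread import pb
    from twisted.cred import credentials
    from twisted.internet import reactor

    # Step 1: log in to the master authentication server with the real password.
    master = pb.PBClientFactory( )
    reactor.connectTCP( "auth.example.com", 8800, master )   # hypothetical host/port
    d = master.login( credentials.UsernamePassword( username, password ) )

    # Step 2: fetch the session cookie from the master avatar, then use
    # username + cookie as the credentials for any other broker.
    def got_cookie( avatar ):
        return avatar.callRemote( "get_authentication_cookie" )   # hypothetical remote method

    def login_to_world( cookie ):
        world = pb.PBClientFactory( )
        reactor.connectTCP( "world.example.com", 8801, world )   # hypothetical host/port
        return world.login( credentials.UsernamePassword( username, cookie ) )

    d.addCallback( got_cookie )
    d.addCallback( login_to_world )
    return d
The checker that services those login calls looks like this: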
from zope import interface
from twisted.cred import checkers, credentials
from twisted.python import failure

# Also needed, from the project itself: the login_server module (for
# MasterAuthenticationStatus), a reactor module providing the deferred_tasklet
# and blocking_tasklet decorators, and the NoSuchAccount,
# AuthenticationCookieMismatch and InvalidPassword exceptions.

class UsernamePasswordDatabaseChecker:
    interface.implements( checkers.ICredentialsChecker )
    credentialInterfaces = ( credentials.IUsernamePassword,
                             credentials.IUsernameHashedPassword )

    def __init__( self, login_service ):
        self.login_service = login_service

    def passcheck( self, account, credentials ):
        if not account:
            return failure.Failure( NoSuchAccount( ) )
        c1 = credentials.checkPassword( account.password )
        c2 = credentials.checkPassword( account.authentication_cookie )
        if c1:
            # authenticated directly with the real password
            return login_server.MasterAuthenticationStatus( account.username,
                                                            account, 1 )
        elif c2:
            # authenticated with a cookie issued by the master server
            return login_server.MasterAuthenticationStatus( account.username,
                                                            account, 0 )
        elif not c1 and not c2 and account.authentication_cookie:
            return failure.Failure( AuthenticationCookieMismatch( ) )
        elif not c1 and not c2 and not account.authentication_cookie:
            return failure.Failure( InvalidPassword( ) )
        return failure.Failure( NoSuchAccount( ) )

    @reactor.deferred_tasklet
    def store_auth_cookie( self, auth_status ):
        bcr = reactor.blocking_tasklet( self.login_service.callRemote )
        return bcr( "store_authentication_cookie", auth_status )

    @reactor.deferred_tasklet
    def requestAvatarId( self, credentials ):
        if hasattr( self.login_service, "callRemote" ):
            # delegate authentication to the master server
            bcr = reactor.blocking_tasklet( self.login_service.callRemote )
            acct = bcr( "get_account", credentials.username )
            check = self.passcheck( acct, credentials )
            if isinstance( check, failure.Failure ):
                return check
            status = bcr( "account_status", acct )
            return self.store_auth_cookie( status )
        else:
            # authenticate locally
            return self.login_service.authenticate_account( credentials )
We use one of the function decorators from when we first started integrating Twisted into Stackless Python, deferred_tasklet. If you haven't read that article, the deferred_tasklet decorator wraps the function to run in a separate Stackless tasklet, while still returning a Deferred that can be used by the rest of the Twisted machinery.
The referenced member login_service is a straightforward object that implements the function authenticate_account. If the credentials it is passed are good, it returns an object -- this object is the avatar required by the Perspective Broker. If the credentials are bad, it returns None, which then gets translated into an appropriate remote exception.
The core ingredient here is whether or not UsernamePasswordDatabaseChecker's member login_service has a callRemote method. If it does, we make remote calls to do the authentication, using another one of the function decorators, blocking_tasklet, so that our code can remain synchronous in style.
All in all, using Stackless tasklets to implement this setup results in fewer lines of code, and code that is straightforward and easy to understand. The purely Twisted incarnation of this setup resulted in a crazy amount of nested callbacks that were very difficult to follow during initial testing, let alone a few months later.
The examples for login_service can be provided if anyone wants them; I just need to dig through the CVS repository to find them.
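In the meantime, here is a minimal sketch of what a purely local login_service could look like. The in-memory account store and the attributes on the account objects are my own assumptions, not the original code.
class SimpleLoginService:
    """Hedged sketch of a local login_service; accounts maps usernames to
    objects carrying .password and .authentication_cookie attributes."""

    def __init__( self, accounts ):
        self.accounts = accounts

    def authenticate_account( self, credentials ):
        account = self.accounts.get( credentials.username, None )
        if account is None:
            return None
        # either the real password or a previously issued cookie is acceptable
        if credentials.checkPassword( account.password ):
            return account
        if account.authentication_cookie and \
           credentials.checkPassword( account.authentication_cookie ):
            return account
        return None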
Labels:
programming,
python,
stackless,
twisted,
twisted integration
Thursday, September 25, 2008
Event-based Programming for Python
Oftentimes, you need to have objects that communicate with each other via events. This is a very useful setup, for example, in a GUI -- where these events represent things like mouse clicks, key strokes, or button presses. That's not what I developed these classes for, since I was more interested in simulating things and the event system seemed like the most natural fit, but the recipe is still relevant to other event handling needs.
We're going to build upon earlier discussions, most notably the one about Stackless event loops, by adding some concrete examples of using that recipe.
The Observer pattern strikes again, as I'm defining the relationship between the event generator and the event listener as one of Observable and Observer. We'll make use of the channel_processor function decorator described in Stackless event loops.
import heapq
import weakref

# channel_processor is the decorator from the "Stackless Event Loops" article.

class EventObserver( object ):
    def __init__( self, *args, **kwargs ):
        super( EventObserver, self ).__init__( *args, **kwargs )

    @channel_processor
    def event_transieve( self, *args, **kwargs ):
        # the event arrives as the 'data' keyword (see event_notify below)
        evt = kwargs.get( 'data', None )
        self.process_event( evt )

    def process_event( self, event ):
        pass

    def event_notify( self, event ):
        self.event_transieve( data=event )
This is straightforward enough. The only trickery (if you could call it that) is in the event_transieve method, and all that does is take whatever is passed as the keyword argument data and call the method process_event. In this base class implementation, that method does nothing.
One bit of niftiness does occur, however, when the event_transieve method is invoked. Through the use of function decorators (and therefore transparently to the calling client) this method actually spans tasklets, granting some semblance of concurrency.
Now, we can safely ignore the attach_event_observer and detach_event_observer methods -- they only exist to implement the Observer pattern. The only method we really care about at the moment is dispatcher.
class EventObservable( object ):
    def __init__( self, *args, **kwargs ):
        super( EventObservable, self ).__init__( *args, **kwargs )
        self.__event_observers = weakref.WeakKeyDictionary( )
        self.__events = [ ]

    def attach_event_observer( self, obs, level=1 ):
        if obs not in self.__event_observers:
            self.__event_observers[obs] = level
        return self

    def detach_event_observer( self, obs ):
        if obs in self.__event_observers:
            del self.__event_observers[obs]
        return self

    @channel_processor
    def dispatcher( self, *args, **kwargs ):
        data = kwargs.get( 'data', None )
        wlist = [ ]
        # order the observers by their registered weight
        for key in self.__event_observers.keys( ):
            val = self.__event_observers[key]
            seok = SortableEventObserverKey( key, val )
            heapq.heappush( wlist, seok )
        while len( wlist ):
            obs = heapq.heappop( wlist )
            obs( data )

    def dispatch_event( self, event ):
        # the event travels as the 'data' keyword, which dispatcher expects
        self.dispatcher( data=event )
        return event
In this method we simply loop over all the currently registered observers, invoking (ultimately) their event_notify method. If you don't see how that happens, just be patient and wait until we look at the SortableEventObserverKey helper class and its definition of the __call__ method.
class SortableEventObserverKey( object ):
    def __init__( self, kval, weight, *args, **kwargs ):
        super( SortableEventObserverKey, self ).__init__( *args, **kwargs )
        self.__value = kval
        self.__weight = weight

    def __cmp__( self, other ):
        return cmp( self.__weight, other.__weight )

    def __call__( self, event ):
        return self.__value.event_notify( event )

    def __repr__( self ):
        return "%s, %s" % ( self.__value, self.__weight )
Now, I hate that I had to throw something like that into the discussion. The helper class only exists to make the comparison functions easier when using the heap queue. For anyone unfamiliar with heaps, Python's heapq module maintains a min-heap: the entry with the smallest weight sits at the front of the queue and is the first one taken out of the structure.
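A quick illustration of that ordering, using plain (weight, name) tuples in place of SortableEventObserverKey; the names and weights are arbitrary:
import heapq

queue = [ ]
for name, weight in ( ( "logger", 5 ), ( "physics", 1 ), ( "ai", 3 ) ):
    heapq.heappush( queue, ( weight, name ) )

while queue:
    # pops (1, 'physics'), then (3, 'ai'), then (5, 'logger')
    print heapq.heappop( queue )
With the plumbing out of the way, the last class ties the two roles together.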
class EventParticipant( EventObservable, EventObserver ):
    def __init__( self, *args, **kwargs ):
        super( EventParticipant, self ).__init__( *args, **kwargs )
        # pytypes.get_event_manager is project-specific; it looks up a named
        # event manager used to construct event objects
        event_manager = kwargs.get( "event_manager", "events" )
        self.event_manager = pytypes.get_event_manager( event_manager )

    def generate_event( self, event_type, *data, **kwargs ):
        evt = self.event_manager.create_event( self, event_type, *data, **kwargs )
        return self.dispatch_event( evt )
Here's the easy class to implement. It defines the EventParticipant, which is both the Observable and the Observer. This is, ultimately, the class that I extend for my simulations, since my program domain requires objects to both generate events and be interested in other objects' events. Simply extending from this class gives you that ability in a nice, clean, and concurrent fashion (or, at least, as concurrent as Stackless gets you).
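To make the wiring concrete, here is a rough, hedged sketch that skips EventParticipant (and its project-specific event manager) and uses the two base classes directly. The class names and the string event are made up, and the demo is driven from a worker tasklet since ChannelProcessor's __call__ ultimately blocks via blocking_tasklet.
import stackless

class Clock( EventObservable ):
    pass

class Display( EventObserver ):
    def process_event( self, event ):
        print "display saw:", event

def demo( ):
    clock = Clock( )
    display = Display( )
    clock.attach_event_observer( display, level=1 )
    # dispatch_event hands the event to the dispatcher tasklet, which in
    # turn calls event_notify (and therefore process_event) on the observer
    clock.dispatch_event( "tick" )

# drive the demo from a worker tasklet rather than the main tasklet
stackless.tasklet( demo )( )
stackless.run( )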
Tuesday, September 23, 2008
Stackless Event Loops
This is a follow-on article to Stackless Python Meets Twisted Matrix. This time we use function decorators to turn an ordinary-looking function into a looping event dispatcher, which is useful for the Observer design pattern.
Through the syntactical power of decorators, one can convert any function into a continuously running event loop. This utilizes Stackless Python, which has been discussed in earlier articles not only on this web site, but on many others as well.
The premise behind this event loop is this: a tasklet runs and dispatches incoming "events" to a handler function. To the outside caller, it appears to be a regular function call, but the mechanisms provided by the decorator allow the "event handler" to run in a separate tasklet. If desired, this premise can be extended further to allow the event loop to run in its own thread.
First, let's look at the class that does all the heavy lifting.
import stackless

from twisted.internet import defer
from twisted.python import failure

# blocking_tasklet, deferred_tasklet and the _wrapper helper come from the
# "Stackless Python meets Twisted Matrix" article.

class ChannelProcessor:
    def __init__( self, action ):
        self.channel = stackless.channel( )
        self.action = action
        self.running = True
        self.process( )

    def stop( self ):
        self.running = False
        self.channel.send( 1 )

    @blocking_tasklet
    def __call__( self, *args, **kwargs ):
        c = stackless.channel( )
        self.channel.send( (c, args, kwargs) )
        rv = c.receive( )
        if isinstance( rv, failure.Failure ):
            raise rv.value
        return rv

    @deferred_tasklet
    def process( self ):
        while self.running:
            vals = self.channel.receive( )
            # stop( ) sends a bare integer; anything that is not a 3-tuple
            # shuts the loop down
            if isinstance( vals, tuple ) and len( vals ) == 3:
                c, args, kwargs = vals
                d = defer.Deferred( )
                d.addBoth( c.send )
                _wrapper( d, self.action, *args, **kwargs )
            else:
                self.running = False
This code makes use of the decorators described in an earlier article, available here. As you will notice, the core of the event loop is contained in the process method, which runs in its own tasklet (due to being decorated by the deferred_tasklet decorator). It doesn't matter if you aren't using Twisted for this code to work, although you will need Twisted installed for it to run (unless you change the mechanics of deferred_tasklet).
process simply loops until told otherwise (via the stop method), receiving data from its channel. If the data is a tuple of three items, it calls the original function (stored in the action member). The return value from the event handler is sent on a channel, which we received as the first element of the tuple.
An outside caller enters this mechanism via the __call__ method. This method creates a new channel, then sends that channel, along with the parameters it was called with, down the object's channel. It then waits for data to be sent back to it. After a quick look at the data returned, it either returns the data through the traditional means or raises an exception (if it received one).
Now, for the decorator:
import types

@decorator
def channel_processor( f, *args, **kwargs ):
    func_obj = None
    if type( f ) == types.MethodType:
        # wrapping a bound method by hand: one ChannelProcessor per object
        target = f.im_self
        target_name = f.func_name + '_cp'
        if not hasattr( target, target_name ):
            func_obj = ChannelProcessor( f )
            setattr( target, target_name, func_obj )
        else:
            func_obj = getattr( target, target_name )
    elif not hasattr( f, "_cp" ):
        # plain function: one ChannelProcessor per function
        setattr( f, '_cp', ChannelProcessor( f ) )
        func_obj = f._cp
    else:
        func_obj = f._cp
    return func_obj( *args, **kwargs )
Here, we create a ChannelProcessor object and stuff it into the decorated function as an attribute named _cp. If it already exists, great. In any case, we then call the object, which leads us into the __call__ method shown above.
A special case is made if we are an object's method, instead of a regular function. This does not happen in the regular use-case of a decorator (when using the @decorator_name syntax). It only happens when we do something like:
class A:
    def __init__( self ):
        # wrap the bound method by hand, giving this instance its own processor
        self.func = channel_processor( self.func )

    def func( self, *args ):
        print "Here i am with arg list:", args
You use this method if you need each object to have its own tasklet that handles events. Using the standard decorator syntax results in each function having its own event-handling tasklet.
I tend to use the per-function methodology, but as usual, your mileage may vary.
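For contrast, the per-function style is just the standard decorator syntax; a hedged sketch:
class B:
    # one ChannelProcessor (and therefore one event-handling tasklet) is
    # created for this function and shared by every instance of B
    @channel_processor
    def func( self, *args ):
        print "Here i am with arg list:", args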
Labels:
programming,
python,
stackless,
twisted integration
Monday, September 1, 2008
Stackless Python meets Twisted Matrix....
Sometimes, you come across two programming toolkits that would go great together. However, in the case of Twisted Matrix and Stackless python, there's some legwork required to get these two great systems to work together.
Twisted requires that its reactor runs in the main "tasklet", but if there is no network activity or other deferred code to execute, the reactor loop will block the entire application and thus defeat the purpose behind using tasklets and Stackless.
There is some setup required to get this all working together.
import stackless
from twisted.internet import reactor, task

reactor_tasklet = None

def reactor_run( ):
    # remember which tasklet the reactor lives in, so that code elsewhere
    # can refuse to block inside it
    global reactor_tasklet
    reactor_tasklet = stackless.getcurrent( )
    # repeatedly call stackless.schedule every 0.0001 seconds;
    # this prevents the reactor from blocking out the other tasklets
    schedulingTask = task.LoopingCall( stackless.schedule )
    schedulingTask.start( 0.0001 )
    reactor.run( )

t = stackless.tasklet( reactor_run )
t( )        # bind the tasklet's arguments so it can be scheduled
t.run( )

# run the stackless scheduler.
stackless.run( )
Now, extending out this simple case to a more general solution involves the use of Python's function decorators. (I use the great decorator.py module to make decorators a little easier to write.)
from decorator import decorator
from twisted.internet import defer
from twisted.python import failure

def __filter( d ):
    if isinstance( d, failure.Failure ):
        if isinstance( d.value, TaskletExit ):
            print "ignore taskletexit"
            return None
        return d
    return d
Above is just some boiler-plate code. __filter screens out the TaskletExit exception that gets sent to tasklets; if this isn't done, the Twisted framework wraps it up in an instance of twisted.python.failure.Failure and you get "Unhandled error in Deferred" exceptions at the calling point. Since this is almost never what you want, it's easiest to just filter it out. Of course, in real code you'll remove the line reading 'print "ignore taskletexit"'.
def __wrapper( d, f, *args, **kwargs ):
    try:
        # maybeDeferred guarantees we end up holding a Deferred, whether f
        # returned a plain value or a Deferred of its own
        rv = defer.maybeDeferred( f, *args, **kwargs )
        rv.addCallback( __filter )
        rv.addCallback( d.callback )
        rv.addErrback( __filter )
    except TaskletExit:
        pass
    except Exception, e:
        print e, dir( e )
        d.errback( e )
__wrapper does the actual heavy lifting of the function call. It uses the maybeDeferred function to ensure that after the function call we are only dealing with Deferreds. __wrapper uses Twisted's usual callback mechanism to ensure that the Deferred it received as a function parameter is called once the results of the actual function call are available. This parameter Deferred is essential for the function decorators described next to work.
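If maybeDeferred is unfamiliar, it guarantees a Deferred regardless of whether the wrapped callable returns a plain value, returns a Deferred, or raises. A tiny illustration (the function and values are just for show):
from twisted.internet import defer

def plain_add( a, b ):
    return a + b

def show( result ):
    # fires immediately, because plain_add returned synchronously
    print "sum is", result

d = defer.maybeDeferred( plain_add, 2, 3 )
d.addCallback( show )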
Here we have the two main function decorators, deferred_tasklet and blocking_tasklet, as well as the utility function block_on. The first of these simply returns a Deferred, suspiciously the very same Deferred that it passes as a parameter to the __wrapper function; which, if you've been paying attention, will be triggered once the results of the wrapped-up function are available. All we're really doing here is creating a stackless.tasklet and running __wrapper in that new microthread.
reactor_tasklet = None   # set to the reactor's own tasklet by reactor_run( ), above

@decorator
def deferred_tasklet( f, *args, **kwargs ):
    d = defer.Deferred( )
    t = stackless.tasklet( __wrapper )
    t( d, f, *args, **kwargs )
    t.run( )
    return d

@decorator
def blocking_tasklet( f, *args, **kwargs ):
    f2 = deferred_tasklet( f )
    d = f2( *args, **kwargs )
    if ( reactor_tasklet != stackless.getcurrent( ) and
         stackless.getcurrent( ) != stackless.getmain( ) ):
        return block_on( d )
    raise RuntimeError( "Cannot block in reactor task" )

def block_on( d ):
    chan = stackless.channel( )
    d.addBoth( lambda x, y=chan: y.send( x ) )
    return chan.receive( )
blocking_tasklet goes one step beyond this: it takes the Deferred returned by deferred_tasklet and converts it into a blocking function call. First, it does some sanity checks to ensure that it's not blocking in the same tasklet that Twisted's reactor is running in; somewhere you need to store the value of stackless.getcurrent() when called from within the reactor's tasklet, which is what the reactor_tasklet variable set in reactor_run is for. We also need to make sure that our current tasklet is not the "main" Stackless tasklet; this should never happen, but I like to be safe at times.
The utility function block_on sets up a Stackless channel. It then adds a simple little lambda closure, which only sends its parameter on to the Stackless channel. This closure is added to both the callback and errback chains of the Deferred that we're going to wait on. After this is all set up, we call receive, which blocks this tasklet until the Deferred is finished and the callback/errback fires. At that point, we receive the return value of the original function through the channel and can return it as the return value of our function.
As long as we are not in the same tasklet as Twisted's reactor, we can use this block_on function to turn our otherwise asynchronous code into sequentially executed, synchronous code. This can also be done using Twisted's inlineCallbacks decorator, but that turns the decorated function into a generator, which isn't always what we want.
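As a closing sketch, and assuming the reactor/tasklet setup from the top of this post has already been started, a worker tasklet can wait on any Deferred like this (the two-second delay and the "done" value are arbitrary):
from twisted.internet import defer, reactor

@deferred_tasklet
def worker( ):
    d = defer.Deferred( )
    # have the reactor fire the Deferred two seconds from now
    reactor.callLater( 2.0, d.callback, "done" )
    # block only this tasklet; the reactor keeps running in its own tasklet
    print "deferred fired with:", block_on( d )

worker( )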
Labels:
programming,
python,
stackless,
twisted,
twisted integration