Saturday, June 24, 2006

Week 5

Well, this is week 5 and I'm currently working on thread debugging (I changed the order in my proposal but the times are still the same). I committed some code this morning to allow a user to call mpdb.py with a switch specifying either a target to connect to or a pdbserver instance to start. I also got the code working to set up thread debugging, i.e. setting up our tracer object for each thread so that it follows the thread's execution. When I'm eventually done you'll be able to set breakpoints and inspect frames, all the usual cool shizzle.

I did make a horrible discovery, though. It seems that the unit tests I had written for the mpdb module were passing purely by 'coincidence' and shouldn't have been passing at all. It was a problem with the way I'd written the code to set up a pdbserver and connect a client debugger to it. It's actually still not fixed, as I don't have time tonight, but I'll try to fix it tomorrow.

Also, check out the README.txt in sandbox/trunk/pdb for things left 'todo'.

Monday, June 19, 2006

TAO thread debugging

For the past few days I've been sifting through Python's thread handling code trying to figure out how I'm going to facilitate the debugging of threads in mpdb. Now, the Python interpreter provides no way for Python code to switch between threads, nor does it provide a way to suspend, stop, resume, or interrupt any thread. So how, you may ask, are we going to enable thread debugging in mpdb? The answer, simply, is 'I haven't decided yet'. I'm still running through some possible solutions, which I'll discuss now.

1] This is the least desirable solution. Code a Python module in C that looks at the state of each of the threads and tries to get some more control over them.

2] Have a main MPdb object that initially sets the trace function for every thread created via the threading module (i.e. through threading.settrace()) to some helper function inside mpdb. That helper creates a new tracer object and then, from within the thread, reassigns the trace function to that object's trace method (there's a rough sketch of this below). Sound confusing? It kinda is. The main point is that, because these tracer objects are under our control (i.e. the debugger's), we can handle all the synchronisation of how the main MPdb object wants to receive data from the tracer objects.

3] Similar to the above, except using tracer threads instead of objects. This is how it's done in rpdb2 (Nir Aides' debugger), with positive results.

4] Make calls to mpdb.settrace() inside the thread code, which sets the trace function to one defined in the mpdb module. Whilst this would work both for threads created with thread.start_new_thread() and for the threading module, it's not all happy-days because the programmer has to modify their buggy code in order to monitor the threads.

At the moment I'm leaning in the direction of #2, but we should have confirmation by the end of the week as to exactly which design is going to appear in mpdb.
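
To make #2 a bit more concrete, here's a rough sketch of how it might work. The MThreadTracer class and what its trace method does are purely illustrative, not actual mpdb code:

import sys
import threading

class MThreadTracer(object):
    """ Per-thread tracer object; the real thing would report back to MPdb. """
    def __init__(self, thread_name):
        self.thread_name = thread_name

    def trace(self, frame, event, arg):
        # Here the tracer would synchronise with the main MPdb object,
        # e.g. check breakpoints or hand the frame over for inspection.
        print "%s: %s in %s" % (self.thread_name, event, frame.f_code.co_name)
        return self.trace

def _thread_trace_helper(frame, event, arg):
    """ Installed via threading.settrace(); runs on the first trace event
    inside each new thread and swaps itself out for a tracer object. """
    tracer = MThreadTracer(threading.currentThread().getName())
    # Reassign this thread's trace function to the tracer object's method.
    sys.settrace(tracer.trace)
    return tracer.trace(frame, event, arg)

# The main MPdb object would do this once, before any threads are started:
threading.settrace(_thread_trace_helper)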

Monday, June 12, 2006

Checking in..

Well, I haven't posted here for quite a few days, but I've started making some progress with the code in the tree. There's now a pdbserver command analogous to the gdbserver command in gdb, and a target command to connect to a pdbserver. Here are some docstrings.

target,

""" Connect to a target machine or process.
The first argument is the type or protocol of the target machine
(which can be the name of a class that is available either in the current
working directory or in Python's PYTHONPATH environment variable).
Remaining arguments are interpreted by the target protocol. For more
information on the arguments for a particular protocol, type
`help target' followed by the protocol name.

List of target subcommands:

target serial -- Use a remote computer via a serial line
target tcp -- Use a remote computer via a socket connection
"""


So, from an MPdb prompt you can do

`target tcp localhost:8000'

which connects to a pdbserver running on port 8000 on localhost.

pdbserver,


""" Allow a debugger to connect to this session.
The first argument is the type or protocol that is used for this connection
(which can be the name of a class that is available either in the current
working directory or in Python's PYTHONPATH environment variable).
The next argument gives the protocol-specific arguments (e.g. hostname and
port number for a TCP connection, or a serial device for a serial line
connection). The next argument is the filename of the script that is
being debugged. The rest of the arguments are passed as arguments to the
script file and are optional. For more information on the arguments for a
particular protocol, type `help pdbserver' followed by the protocol name.
The syntax for this command is,

`pdbserver ConnectionClass comm scriptfile [args ...]'

"""


And we can do

`pdbserver tcp localhost:8000 broken_script.py'


One thing you can do with this is, instead of naming 'tcp' as the protocol, name a class as the protocol, for example SocketServer.TCPServer (not that that one would actually work, but you get the idea). More importantly, there's an abstract interface that connection classes must implement in order to be used by pdbserver. This class is called MServerConnectionInterface, so somebody could derive from it in a class named MyNewConnectionClass, put that in a file named connclasses.py (as long as the file is in the current directory or on the PYTHONPATH environment variable), and then a user could do

`pdbserver connclasses.MyNewConnectionClass localhost:8000 broken_script.py'
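
For the curious, here's roughly what such a class might look like. I'm guessing at the method names MServerConnectionInterface requires (I've borrowed accept/listen/disconnect from the target interface I posted earlier) and at the module it lives in, so treat this as a sketch rather than the real API:

# connclasses.py -- a sketch only; the interface's method names and the
# 'mconnection' module are assumptions, not necessarily what's in the tree.
import socket

from mconnection import MServerConnectionInterface

class MyNewConnectionClass(MServerConnectionInterface):
    """ A home-grown TCP connection class usable by pdbserver. """
    def __init__(self, addr):
        host, port = addr.split(':')
        self.addr = (host, int(port))
        self.sock = None
        self.client = None

    def listen(self):
        """ Listen for an incoming connection from a debugging console. """
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind(self.addr)
        self.sock.listen(1)

    def accept(self):
        """ Accept a connection from a debugging console. """
        self.client, _ = self.sock.accept()

    def disconnect(self):
        """ Close all connections. """
        if self.client:
            self.client.close()
        if self.sock:
            self.sock.close()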

I've found this pretty useful.

Right, bedtime.

Tuesday, June 06, 2006

Pydb, unit tests and file descriptors

Unfortunately, the first topic of this post is an annoying one. I couldn't access www.blogger.com _again_ the other day, and if this keeps happening I'm going to move my blog to someone who can provide a half-decent service.

Right, anyway, there have been a few developments over the past few days. I've started to write some unit tests, mainly to test the connection classes and make sure that they work by themselves. A key idea here is that the connection classes we've provided should be 'generic' to the extent that they can be used in other projects.
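
To give a flavour of what these tests look like, here's a rough sketch. The module and class names (mconnection, MTCPServerConnection, MTCPClientConnection) are placeholders rather than exactly what's in the tree:

import threading
import unittest

# Placeholder names; the real connection classes may be called something else.
from mconnection import MTCPServerConnection, MTCPClientConnection

class TestTCPConnection(unittest.TestCase):
    def test_roundtrip(self):
        server = MTCPServerConnection('localhost:8000')
        # Accept the console's connection in a background thread so that
        # the client's connect() below doesn't block forever.
        t = threading.Thread(target=server.accept)
        t.start()

        client = MTCPClientConnection('localhost:8000')
        client.connect()
        t.join()

        # Whatever goes in one end should come out the other.
        client.write('where\n')
        self.assertEqual(server.read(), 'where\n')

        client.disconnect()
        server.disconnect()

if __name__ == '__main__':
    unittest.main()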

Also, I'm now building on top of pydb rather than plain pdb. Pydb is the Extended Python Debugger and is written by Rocky Bernstein. It provides some excellent enhancements to pdb's features and, whilst my project aims to do the same, it does so with different goals in mind.

I've also come across another design decision: how do I get the debugger client to send its commands to the server? I could override some of the pydb methods so that all the commands typed on the client's stdin are sent to the server, but what if some of the commands being typed should be run on the client side? This is something I'm gonna have to think about tomorrow.
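
One possible shape for this, purely as a sketch: subclass the pydb command processor and forward anything that isn't an obviously local command. The list of local commands, the self.connection object and the exact class pydb exposes are all assumptions here, not code that exists yet:

from pydb import Pdb   # assuming pydb exposes a Pdb-like command class

# Commands that only make sense on the client side; this list is a guess.
LOCAL_COMMANDS = ['target', 'help', 'quit']

class MPdbClient(Pdb):
    def onecmd(self, line):
        """ Run local commands locally, ship everything else to the pdbserver. """
        parts = line.split()
        if not parts or parts[0] in LOCAL_COMMANDS:
            return Pdb.onecmd(self, line)
        # self.connection would be one of our connection classes.
        self.connection.write(line + '\n')
        print self.connection.read(),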

Anyway, time for bed for me..

Saturday, June 03, 2006

Day 8

I just posted a link to the pyxides group regarding the status of my project. In the post I asked two questions of the pyxides community.

1. Whether they thought that subclassing two classes, one that specifies the interface of an MPdb target and one that specifies the protocol used for communication, was a good design. In this design the target abstract class, which we'll call MTargetInterface, looks something like this

class MTargetInterface(object):
    def accept(self, client):
        """ Accept a connection from a debugging console. """
        raise NotImplementedError, "Override this method in a subclass"

    def disconnect(self):
        """ Close a connection to a debugging console. """
        raise NotImplementedError

    def listen(self):
        """ Listen for incoming connections from debugging consoles. """
        raise NotImplementedError


And this is what I meant about the 'interface' for a target. This is just an abstract class that provides an interface that a subclass must implement. How these classes are implemented for different protocols is of no concern to the debugger core. So, an implementation of a target may be something along the lines of


from SocketServer import TCPServer

class MTCPTarget(MTargetInterface, TCPServer):
    """ Allow incoming connections from debugging consoles using the TCP protocol. """
    def __init__(self, addr):
        MTargetInterface.__init__(self)
        # TCPRequestHandler would be defined elsewhere.
        TCPServer.__init__(self, addr, TCPRequestHandler)

    def accept(self, client):
        """ Accept incoming connections from a debugging console. """
        # This is really just TCPServer's job; handle_request() takes no arguments.
        self.handle_request()

    def listen(self):
        """ Listen for incoming connections. """
        # This would probably be handled by TCPServer anyway.
        pass

    def disconnect(self):
        """ Close all connections. """
        # Here we would close all connections.
        pass


I've left out the real _meat_ of this class because the other stuff would just be getting TCPServer to behave properly.


2. What techniques are currently being used to keep the source file that is local to a GUI front-end for a debugger in sync with the source code the debugger is working on. Front-ends usually keep a 'local' copy of the source code so that they can perform their own parsing and whatnot, and I was curious to find out from the community how exactly they do this.

Well, that's all for this time. Be sure to check the pyxides group for more information and replies from people.
