I'm writing a simplified wrapper class in Python for an AWS module (Boto, specifically).  Several times in this process I've used @property to avoid writing special "getter" and "setter" methods in my library - I'm told that this is the more pythonic way to do it.  When using the class, the programmer accesses these properties as if they were plain attributes, like this:
myclass.myprop = 5         # sends "5" to myprop's setter function
result = myclass.myprop    # calls myprop's getter function and stores the result
But I'm also dealing with several sets of objects - name/value pairs of tags, for example - that I would like to access as if they were held in a container, possibly a dictionary or a list. Taking the tag example:
myclass.tags["newkey"] = "newvalue"   # runs a function that applies tag in AWS
result = myclass.tags["newkey"]       # accesses AWS to get value of "newkey" tag
From what I'm seeing, it looks like it would be possible to do this by subclassing dict, but I feel like I'm missing something here.  What is the most pythonic way to create an interface like this?
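For comparison, the dict-subclassing idea can be done more safely with collections.abc.MutableMapping, which derives the rest of the dict interface (get, update, keys, in, etc.) from a handful of hooks.  Here's a minimal sketch - the class name is hypothetical, and a plain dict stands in for the real AWS calls:

```python
from collections.abc import MutableMapping

class TagProxy(MutableMapping):
    """Dict-like object whose item access would be backed by AWS calls.
    A plain dict stands in for the AWS connection here."""
    def __init__(self):
        self._backend = {}          # placeholder for the AWS connection

    def __getitem__(self, key):
        return self._backend[key]   # would call AWS to get the tag

    def __setitem__(self, key, value):
        self._backend[key] = value  # would call AWS to set the tag

    def __delitem__(self, key):
        del self._backend[key]      # would call AWS to remove the tag

    def __iter__(self):
        return iter(self._backend)  # would call AWS to list all tags

    def __len__(self):
        return len(self._backend)

tags = TagProxy()
tags["newkey"] = "newvalue"
print(tags["newkey"])               # prints: newvalue
```

Subclassing MutableMapping instead of dict means every access really goes through your five hooks; subclassing dict directly lets methods like update and setdefault bypass your overrides.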
EDIT: I ended up using Silas Ray's solution, but modified it so that the classes can be used to define multiple dict-like objects. It's not exactly clean, but I'm going to post my modified code and an explanation here to help anyone else having trouble grokking this.
class FakeDict(object):
    def __init__(self, obj, getter, setter, remover, lister):
        self.obj = obj
        self.getter = getter
        self.setter = setter
        self.lister = lister
        self.remover = remover
    def __getitem__(self, key):
        return self.getter(self.obj, key)
    def __setitem__(self, key, value):
        self.setter(self.obj, key, value)
    def __delitem__(self, key):
        self.remover(self.obj, key)
    def _set(self, new_dict):
        for key in self.lister(self.obj):
            if key not in new_dict:
                self.remover(self.obj, key)
        for key, value in new_dict.items():
            self.setter(self.obj, key, value)
class ProxyDescriptor(object):
    def __init__(self, name, klass, getter, setter, remover, lister):
        self.name = name
        self.proxied_class = klass
        self.getter = getter
        self.setter = setter
        self.remover = remover
        self.lister = lister
    def __get__(self, obj, klass):
        if obj is None:  # accessed on the class itself, not an instance
            return self
        if not hasattr(obj, self.name):
            setattr(obj, self.name, self.proxied_class(obj, self.getter, self.setter, self.remover, self.lister))
        return getattr(obj, self.name)
    def __set__(self, obj, value):
        self.__get__(obj, obj.__class__)._set(value)
class AWS(object):
    def get_tag(self, tag):
        print("Ran get tag")
        # Call to AWS to get tag
        return "fgsfds"
    def set_tag(self, tag, value):
        print("Ran set tag")
        # Call to AWS to set tag
    def remove_tag(self, tag):
        print("Ran remove tag")
        # Call to AWS to remove tag
    def tag_list(self):
        print("Ran list tags")
        # Call to AWS to retrieve all tags
        return []
    def get_foo(self, foo):
        print("Ran get foo")
        # Call to AWS to get foo
        return "fgsfds"
    def set_foo(self, foo, value):
        print("Ran set foo")
        # Call to AWS to set foo
    def remove_foo(self, foo):
        print("Ran remove foo")
        # Call to AWS to remove foo
    def foo_list(self):
        print("Ran list foo")
        # Call to AWS to retrieve all foos
        return []
    tags = ProxyDescriptor('_tags', FakeDict, get_tag, set_tag, remove_tag, tag_list)
    foos = ProxyDescriptor('_foos', FakeDict, get_foo, set_foo, remove_foo, foo_list)
test = AWS()
tagvalue = test.tags["tag1"]
print(tagvalue)
test.tags["tag1"] = "value1"
del test.tags["tag1"]
foovalue = test.foos["foo1"]
print(foovalue)
test.foos["foo1"] = "value1"
del test.foos["foo1"]
Now for the explanation.
tags and foos are both class-level instances of ProxyDescriptor, instantiated only once, when the class is defined.  They sit at the bottom of the class body so they can reference the method definitions above them, which define the behavior for the various dictionary actions.
Most of the "magic" happens in the __get__ method of ProxyDescriptor.  Any access to test.tags runs the descriptor's __get__ method, which simply checks whether test (passed in as obj) has an attribute named _tags yet.  If it doesn't, it creates one - an instance of the class that was passed to the descriptor (FakeDict here).  So FakeDict's constructor ends up being called exactly once for every instance of AWS on which tags is accessed.
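The create-on-first-access behavior can be demonstrated with a stripped-down descriptor (a sketch with hypothetical names; a plain dict stands in for FakeDict):

```python
class CachedProxy:
    """Descriptor that builds a per-instance payload on first access."""
    def __init__(self, name):
        self.name = name    # instance attribute used as the cache slot
        self.builds = 0     # count how many times we actually build

    def __get__(self, obj, klass):
        if not hasattr(obj, self.name):
            self.builds += 1
            setattr(obj, self.name, {})   # stand-in for FakeDict(...)
        return getattr(obj, self.name)

class Owner:
    data = CachedProxy('_data')

o = Owner()
o.data["k"] = "v"       # first access: the dict is created and cached
print(o.data["k"])      # second access: the cached dict is reused
print(Owner.__dict__['data'].builds)  # prints 1 - built once per instance
```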
We've passed the set of four functions through the descriptor and into FakeDict's constructor - but using them inside FakeDict is a little tricky because the context has changed.  If we call the functions directly on an instance of the AWS class (as in test.get_tag), Python automatically fills in the self argument with the instance test.  But they're not being called on test here - what we passed to the descriptor were the class-level (plain, unbound) functions, which have no instance attached.  To get around this, we treat self as an ordinary argument: obj in FakeDict is our test object, so we pass it in explicitly as the first argument to each function.
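The bound-versus-unbound distinction can be seen in isolation with a throwaway class (hypothetical names, nothing AWS-specific):

```python
class Greeter:
    def greet(self, name):
        return "hello " + name

g = Greeter()
print(g.greet("world"))     # bound method: Python supplies self for us

unbound = Greeter.greet     # the plain function pulled off the class
print(unbound(g, "world"))  # no instance attached, so we pass it explicitly
```

Both calls print "hello world" - a bound method is just the plain function with the instance pre-filled as the first argument.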
Part of what makes this so confusing is that there are several circular references between AWS, ProxyDescriptor, and FakeDict.  If you're having trouble understanding it, keep in mind that in both ProxyDescriptor and FakeDict, obj is the instance of the AWS class that was passed in, even though the instance of FakeDict lives inside that same AWS instance.