I'm writing a convenience library and looking for a best-practice way to load lambda functions from a text file.
- The library is designed to import one or more datasets (Facebook Insights, for example) in a known format and manipulate the data into a Pandas dataframe that can then be plotted either in an IPython notebook or a webpage.
- Each definition contains functions (for aggregation in `DataFrame.groupby()`, and also `lambda` functions that are used with `DataFrame.apply()`). I've currently hard-coded the rules to manipulate each file into a `dict`, but I'd like to abstract these into a series of JSON files so that I can more easily add definitions.
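To illustrate what I have in mind, a definition file might look something like this (the keys and structure here are hypothetical, just to show the idea of JSON-driven rules):

```python
import json

# Hypothetical definition file contents -- the keys ("aggregations",
# "derived_columns") are illustrative, not an existing schema.
definition_text = """
{
    "dataset": "facebook_insights",
    "aggregations": {"likes": "sum", "reach": "mean"},
    "derived_columns": {
        "engagement_pct": "percentage_of_reach"
    }
}
"""

definition = json.loads(definition_text)
print(definition["aggregations"]["likes"])  # sum
```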
For the aggregation methods, the list is fairly short, so I could easily cover it with a series of if statements. However, by definition the `apply` lambda functions are bespoke for each definition. Here's an example which takes a couple of columns to derive a percentage:
lambda x: float(float(x[1]) / float(x[0])) * 100
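For concreteness, here is that lambda applied to a sample row, assuming `x[0]` holds the total and `x[1]` the subset (both as strings, as they might arrive from a CSV):

```python
# The example lambda from above: derive a percentage from two columns.
pct = lambda x: float(float(x[1]) / float(x[0])) * 100

row = ["200", "50"]  # e.g. [total, subset] read in as strings
print(pct(row))  # 25.0
```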
I'm aware of `eval`, but it doesn't sound like good practice: I'd one day like to open this up for others to use, and `eval` on user-supplied text is open to abuse. The `jsonpickle` library has the same problem in principle. The remaining alternative would be a fixed list of functions, but I don't see how this kind of arbitrary function can be reduced to a fixed list.
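One pattern I've considered, sketched below, is a whitelist: keep the bespoke operations in the library as named Python callables and have the JSON refer to them by name plus column arguments, so no code is ever evaluated from the file. The names here (`FUNCTION_REGISTRY`, `percentage_of`) are invented for illustration:

```python
import json

# Hypothetical registry of safe, named operations shipped with the library.
def percentage_of(row, base_col, part_col):
    """Derive a percentage from two columns of a row."""
    return float(row[part_col]) / float(row[base_col]) * 100

FUNCTION_REGISTRY = {
    "percentage_of": percentage_of,
}

# A JSON definition names a registered function instead of embedding code.
spec = json.loads('{"func": "percentage_of", "args": [0, 1]}')

func = FUNCTION_REGISTRY[spec["func"]]  # unknown names raise KeyError
row = ["200", "50"]
print(func(row, *spec["args"]))  # 25.0
```

The obvious cost is exactly the one I raised: every new derived-column rule still needs a Python function added to the registry, so it only helps if the bespoke lambdas turn out to share a small set of shapes.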
Has anyone got similar experience and is able to offer a best-practice approach?