Language: Python 2.7.6
File size: ~1.5 GB
XML format:
<myfeed>
    <product>
        <id>876543</id>
        <name>ABC</name>
        ....
    </product>
    <product>
        <id>876567</id>
        <name>DEF</name>
        ....
    </product>
    <product>
        <id>986543</id>
        <name>XYZ</name>
        ....
    </product>
</myfeed>
I have to:
A) Read all the <product> nodes
B) Delete some of these nodes (if the <id> element's text is in a Python set())
C) Update/alter a few nodes (if the <id> element's text is a key in a Python dict)
D) Append/write some new nodes
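To make tasks B and C concrete, the lookup structures might look like this (a sketch only; the ids, values, and variable names are made up):

```python
# Hypothetical lookup structures for tasks B and C (all names/ids invented):
products_id_hash_set_to_delete = {'876543'}               # ids whose <product> node to delete
products_dict_to_update = {'876567': {'name': 'DEF-v2'}}  # id -> fields to change

# Membership tests against a set or dict are O(1) on average,
# so checking every <id> in the feed against them stays cheap.
print('876543' in products_id_hash_set_to_delete)  # True
```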
The problem is that my XML file is huge (approx. 1.5 GB). I did some research and decided to use lxml for all of these tasks.
I am trying to use iterparse() with element.clear(), because that way the parse will not consume all my memory.
for event, element in etree.iterparse(big_xml_file, tag='product'):
    for child in element:
        if child.tag == unique_tag:
            if child.text in products_id_hash_set_to_delete:  # Python set()
                pass  # delete this element node
            elif child.text in products_dict_to_update:
                pass  # update this element node
            else:
                print child.text
    element.clear()
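For what it is worth, here is a minimal, self-contained sketch of the iterparse() + element.clear() pattern above, run against a tiny in-memory copy of the feed. It uses the stdlib xml.etree.ElementTree (whose iterparse()/clear() interface lxml mirrors; lxml additionally accepts the tag='product' filter), and the ids and lookup names are invented:

```python
import io
import xml.etree.ElementTree as etree

xml_data = b"""<myfeed>
    <product><id>876543</id><name>ABC</name></product>
    <product><id>876567</id><name>DEF</name></product>
    <product><id>986543</id><name>XYZ</name></product>
</myfeed>"""

products_to_delete = {'876543'}            # ids whose <product> to drop
products_to_update = {'876567': 'DEF-v2'}  # id -> replacement <name> text

kept = []  # serialized nodes that survive; real code would stream these to a new file
for event, element in etree.iterparse(io.BytesIO(xml_data)):
    if element.tag != 'product':           # lxml: iterparse(..., tag='product')
        continue
    pid = element.findtext('id')
    if pid in products_to_delete:
        pass                               # B) delete: simply do not write it out
    elif pid in products_to_update:
        element.find('name').text = products_to_update[pid]  # C) update in place
        kept.append(etree.tostring(element))
    else:
        kept.append(etree.tostring(element))
    element.clear()                        # free this subtree's memory
```

In real code each kept element would be written to an output file as soon as it is produced (and the new nodes for task D appended before the closing </myfeed>), rather than accumulated in a list.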
Note: I want to achieve all four of these tasks in one scan of the XML file.
Questions
1) Can I achieve all of this in one scan of the file?
2) If yes, how do I delete and update the element nodes I am processing?
3) Should I use tree.xpath() instead? If yes, how much memory will it consume for a 1.5 GB file, or does it work the same way as iterparse()?
I am not very experienced in Python; I come from a Java background.