I'm developing an app where I have to parse a huge XML file (65 MB) with the following structure, in order to generate a PDF file from it using Jasper Reports:
<A>
    <a attribute1="" attribute2="" attribute3=""/>
</A>
<B>
    <b attribute1="" attribute2="" attribute3=""/>
</B>
<C>
    <c attribute1="" attribute2="" attribute3=""/>
</C>
<D>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    <d attribute1="" attribute2="" attribute3=""/>
    ...
</D>
... with a very large number of <d> tags (at least 500,000).
My problem is that there are so many of these tags that parsing the file causes a java.lang.OutOfMemoryError: Java heap space error.
I'm using this line to parse the file:
Document document = JRXmlUtils.parse(JRLoader.getLocationInputStream(xmlPath));
Does anyone have an alternative to the JRXmlUtils.parse method that avoids the OutOfMemoryError (without raising the heap space)?
Thank you
EDIT :
I've already seen this post concerning SAXParser, but I don't know how to adapt it to my case since my XML structure is a little unusual (there is a lot of data before my problematic tags). Any clarification?
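Here is the kind of thing I imagine, based on that post: a SAX handler that streams through the file, ignores everything before <D>, and only reads the attributes of each <d> as it goes by, so the whole tree is never held in memory. This is just a sketch, not tested against my real file (the element/attribute names come from the structure above, collecting attribute1 is only an illustration, and in the demo I wrap everything in a single <root> element since the snippet above has several top-level elements):

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class StreamingDParser {

    // Stream the XML and collect only the attribute values we need from
    // each <d> element, instead of building a DOM of the whole document.
    public static List<String> parseD(InputStream in) throws Exception {
        List<String> rows = new ArrayList<>();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(in, new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                // Elements that come before <D> (<a>, <b>, <c>, ...) are
                // simply ignored here; add branches if their data is needed.
                if ("d".equals(qName)) {
                    rows.add(attrs.getValue("attribute1"));
                }
            }
        });
        return rows;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<root><A><a attribute1=\"x\"/></A>"
                   + "<D><d attribute1=\"1\"/><d attribute1=\"2\"/></D></root>";
        List<String> rows = parseD(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        System.out.println(rows); // prints "[1, 2]"
    }
}
```

But I don't see how to feed the rows collected this way into Jasper Reports in place of the Document returned by JRXmlUtils.parse, which is really my question.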