The xml-to-json package

[Tags:library, mit, program]

This simple tool converts XML to JSON, gaining readability while losing information such as comments and attribute ordering. Its main purpose is to convert legacy XML-based data into a format that can be imported into JSON databases such as CouchDB and MongoDB.



Versions 1.0.0, 1.0.1, 2.0.0, 2.0.1
Dependencies aeson, base (==4.5.*), bytestring, containers, hxt, hxt-curl, hxt-expat, hxt-tagsoup, text, unordered-containers, vector [details]
License GPL-3
Copyright Copyright Noam Lewis, 2012
Author Noam Lewis
Category Web, XML
Home page
Bug tracker
Source repository head: git clone
Uploaded Wed Oct 31 03:23:18 UTC 2012 by NoamLewis
Distributions NixOS:2.0.1, Stackage:2.0.1
Downloads 1791 total (38 in the last 30 days)
Status Docs not available [build log]
All reported builds failed as of 2016-12-23 [all 7 reports]
Hackage Matrix CI



Readme for xml-to-json



Fast & easy command line tool for converting XML files to JSON.



The output is designed to be easy to store and process using JSON-based databases, such as mongoDB and CouchDB. In fact, the original motivation for xml-to-json was to store and query a large (~10GB) XML-based dataset, using an off-the-shelf scalable JSON database.

Currently the tool processes XML according to lossy rules designed to produce sensibly minimal output. If you need to convert without losing any information, consider something like the XSLT offered by the jsonml project. Unlike jsonml, this tool - xml-to-json - produces JSON output similar (but not identical) to that of the xml2json-xslt project.
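To make the lossy rules concrete, here is an illustrative sketch in Python - not the tool's actual code - of a mapping in the same spirit: comments are dropped, attributes become plain properties, repeated child tags collapse into arrays, and a text-only element collapses into a simple string. The helper name and sample XML are made up for illustration.

```python
# Illustrative sketch of a lossy XML-to-JSON mapping (hypothetical
# helper, NOT xml-to-json's implementation).
import json
import xml.etree.ElementTree as ET  # default parser already drops comments

def element_to_json(elem):
    obj = dict(elem.attrib)                 # attributes become properties
    text = (elem.text or "").strip()
    for child in elem:
        value = element_to_json(child)
        if child.tag in obj:                # repeated tags become arrays
            existing = obj[child.tag]
            if isinstance(existing, list):
                existing.append(value)
            else:
                obj[child.tag] = [existing, value]
        else:
            obj[child.tag] = value
    if not obj:                             # element with only text collapses
        return text                         # into a simple string
    if text:
        obj["value"] = text
    return obj

xml_src = '<Tests><Test Name="T"><SomeText>hi</SomeText></Test></Tests>'
print(json.dumps({"Tests": element_to_json(ET.fromstring(xml_src))}))
```

Information such as the comment nodes and the distinction between an attribute and a text-only child is unrecoverable from the output, which is exactly the trade-off described above.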

Implementation Notes

xml-to-json is implemented in Haskell. Currently the implementation is minimal - for example, the core translation functionality is not exported as a library. If you want to use it as a library, open an issue on this project (or, better yet, do it yourself and submit a pull request).

As of this writing, xml-to-json uses hxt with the expat-based hxt-expat parser. The pure-Haskell parsers for hxt all seem to have memory issues that hxt-expat doesn't.


Note for Windows users: Download the pre-built exe here. I managed to compile using Cygwin. Use the xml-to-json branch for Cygwin if you're trying to build for Windows. The main difference is the lack of curl support.

To install the release version: since xml-to-json is implemented in Haskell, "all you need to do" is install the latest Haskell Platform for your system, and then run:

cabal update
cabal install xml-to-json

To install from source: Clone this repository locally, and then (assuming you have Haskell platform installed) run cabal install:

cd xml-to-json
cabal install


Basic usage

Just run the tool with the filename as a single argument, and direct the stdout to a file or a pipe:

xml-to-json myfile.xml > myfile.js


Use the --help option to see the full command line options.

Here's a (possibly outdated) snapshot of the --help output:

Usage: <program> [OPTION...] files...
  -h      --help              Show this help
  -t TAG  --tag-name=TAG      Start conversion with nodes named TAG (ignoring all parent nodes)
  -s      --skip-roots        Ignore the selected nodes, and start converting from their children
                              (can be combined with the 'start-tag' option to process only children of the matching nodes)
  -a      --as-array          Output the resulting objects in a top-level JSON array
  -m      --multiline         When using 'as-array' output, print each top-level json object on a separate line.
                              (If not using 'as-array', this option will be on regardless, and output is always line-separated.)
          --no-collapse-text  Don't collapse elements that only contain text into a simple string property.
                              Instead, always emit '.value' properties for text nodes, even if an element contains only text.
                              (Output 'schema' will be more stable.)
          --no-ignore-nulls   Don't ignore nulls (and do output them) in the top level of output objects
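As an illustration of what `--no-collapse-text` changes (a sketch of the behavior documented above, using a hypothetical helper, not the tool's code): with collapsing enabled (the default), a text-only element becomes a plain string property; with the option set, every element keeps an explicit `value` property, so the output 'schema' stays stable.

```python
# Sketch of the documented --no-collapse-text behavior
# (hypothetical helper, NOT the tool's implementation).
import json

def emit_text_element(text, collapse=True):
    # Default: an element containing only text collapses into a
    # simple string. With --no-collapse-text: it always becomes an
    # object with an explicit "value" property.
    return text if collapse else {"value": text}

print(json.dumps({"SomeText": emit_text_element("Some simple text")}))
print(json.dumps({"SomeText": emit_text_element("Some simple text", collapse=False)}))
```

The second form is noisier but gives every element the same shape, which can simplify downstream schema definitions.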

Example output

Input file:

<?xml version="1.0"?>
<Tests>
  <Test Name="The First Test">
    <SomeText>Some simple text</SomeText>
    <Description Format="FooFormat">
Just a dummy
<!-- comment -->
Xml file.
    </Description>
  </Test>
  <Test Name="Second"/>
</Tests>

JSON output using default settings:

{"Tests":{"Test":[{"Name":"The First Test","SomeText":"Some simple text","Description":{"Format":"FooFormat","value":"Just a dummy\n\nXml file."}},{"Name":"Second"}]}}

Formatted for readability (not the actual output):

{
  "Tests": {
    "Test": [
      {
        "Name": "The First Test",
        "SomeText": "Some simple text",
        "Description": {
          "Format": "FooFormat",
          "value": "Just a dummy\n\nXml file."
        }
      },
      {
        "Name": "Second"
      }
    ]
  }
}

Using the various options you can control aspects of the output, such as:

  • At which top-level nodes the conversion starts to work (-t)
  • Whether to wrap the output in a top-level JSON array,
  • Whether or not to collapse simple string elements, such as the <SomeText> element in the example, into a simple string property
  • And more.

Use the --help option to see a full list of options.
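Since line-separated output puts one complete JSON object on each line, a downstream importer can stream records instead of loading the whole file. Here is a hypothetical Python consumer of such output (the sample lines stand in for a real output file like `myfile.js`):

```python
# Hypothetical consumer of line-separated xml-to-json output:
# each line holds one top-level JSON object, so records can be
# processed one at a time before handing them to a JSON database.
import io
import json

# Stand-in for real xml-to-json output redirected to a file:
sample_output = io.StringIO(
    '{"Name":"The First Test","SomeText":"Some simple text"}\n'
    '{"Name":"Second"}\n'
)

records = [json.loads(line) for line in sample_output if line.strip()]
for record in records:
    # A real importer would send each record to e.g. CouchDB or MongoDB here.
    print(record["Name"])
```

This streaming style is why the line-separated layout is the default when not using 'as-array'.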


For large XML files, the speed on a Core i5 machine is about 2 MB of XML per second; a 100 MB XML file results in a 56 MB JSON output, and it took about 10 minutes to process 1 GB of XML data. The main performance limit is memory: only one single-threaded process was run, since every large file (tens of megabytes) consumes a lot of memory - about 50 times the size of the file.

A few simple tests have shown this to be at least twice as fast as jsonml's xslt-based converter (however, the outputs are not similar, as stated above).

Currently the program processes files serially. If run in parallel on many small XML files (<5 MB), processing becomes CPU-bound and may be much faster, depending on the architecture. A simple test showed performance of about 5 MB/sec (on the same Core i5).