
Parsing a large JSON file efficiently and easily

2019-04-16

When parsing a JSON file, or an XML file for that matter, you have two options. You can read the file entirely into an in-memory data structure (a tree model), which allows easy random access to all the data. Or you can process the file in a streaming manner, in which case either the parser is in control, pushing out events to the application (as XML SAX parsers do), or the application pulls events from the parser. The push approach makes it easy to chain multiple processors but is quite hard to program against; the pull approach is rather easy to program and lets you stop parsing as soon as you have what you need.
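The trade-off is easy to see outside of Java as well. As a rough Python sketch (not part of the original tool, and with made-up sample data), the tree model corresponds to a single `json.loads` call that materializes everything at once, while a pull-style loop over newline-delimited JSON holds only one record in memory at a time and can stop early:

```python
import json

# Tree model: one call, the whole document in memory, random access afterwards.
doc = '{"records": [{"field1": "outer"}, {"field1": "inner"}]}'
tree = json.loads(doc)
print(tree["records"][1]["field1"])     # jump anywhere in the tree

# Pull-style analogue: newline-delimited JSON, one record at a time.
lines = '{"field1": "outer"}\n{"field1": "inner"}\n'
for line in lines.splitlines():
    record = json.loads(line)           # only this record is materialized
    if record["field1"] == "inner":     # stop as soon as we have what we need
        break
```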

Recently I was working on a little import tool for Lily,
which would read a schema description and records from a JSON file and put them into Lily.

Since I did not want to spend hours on this, I thought it was best to go for the tree model, thus reading the entire JSON file into memory. Still, it seemed like the sort of tool that might easily be pushed beyond its design: generate a large JSON file, then use the tool to import it into Lily. In that case, reading the file entirely into memory might be impossible.

So I started using Jackson's pull API, but quickly changed my mind, as this would be too much work. But then I looked a bit closer at Jackson's API and found out that it is very easy to combine the streaming and tree-model parsing options: you can move through the file as a whole in a streaming way, and read individual objects into a tree structure.

As an example, let’s take the following input:

{
  "records": [
    {"field1": "outer", "field2": "thought"},
    {"field2": "thought", "field1": "outer"}
  ],
  "special message": "hello, world!"
}

For this simple example it would be better to use plain CSV, but just imagine the fields being sparse or the records having a more complex structure.
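To make the tree-model baseline concrete first, here is a short Python sketch (Python rather than Java, purely to keep the example self-contained): `json.loads` reads the whole document into memory, after which field order in the file no longer matters.

```python
import json

doc = """
{
  "records": [
    {"field1": "outer", "field2": "thought"},
    {"field2": "thought", "field1": "outer"}
  ],
  "special message": "hello, world!"
}
"""

data = json.loads(doc)  # the entire document is now in memory
for rec in data["records"]:
    # field order in the file is irrelevant once we have the tree
    print(rec["field1"], rec["field2"])
```

This is exactly the approach that stops working once the file no longer fits in memory.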

The following snippet illustrates how this file can be read using a combination of stream and tree-model parsing. Each individual record is read into a tree structure, but the file is never read into memory in its entirety, making it possible to process JSON files gigabytes in size while using minimal memory.

import org.codehaus.jackson.map.*;
import org.codehaus.jackson.*;

import java.io.File;

public class ParseJsonSample {
  public static void main(String[] args) throws Exception {
    JsonFactory f = new MappingJsonFactory();
    JsonParser jp = f.createJsonParser(new File(args[0]));

    JsonToken current;

    current = jp.nextToken();
    if (current != JsonToken.START_OBJECT) {
      System.out.println("Error: root should be object: quitting.");
      return;
    }

    while (jp.nextToken() != JsonToken.END_OBJECT) {
      String fieldName = jp.getCurrentName();
      // move from field name to field value
      current = jp.nextToken();
      if (fieldName.equals("records")) {
        if (current == JsonToken.START_ARRAY) {
          // For each of the records in the array
          while (jp.nextToken() != JsonToken.END_ARRAY) {
            // read the record into a tree model,
            // this moves the parsing position to the end of it
            JsonNode node = jp.readValueAsTree();
            // And now we have random access to everything in the object
            System.out.println("field1: " + node.get("field1").getValueAsText());
            System.out.println("field2: " + node.get("field2").getValueAsText());
          }
        } else {
          System.out.println("Error: records should be an array: skipping.");
          jp.skipChildren();
        }
      } else {
        System.out.println("Unprocessed property: " + fieldName);
        jp.skipChildren();
      }
    }                
  }
}

As you can guess, each nextToken() call returns the next parsing event: start object, field name, start array, start object, …, end object, …, end array, …

The jp.readValueAsTree() call reads whatever is at the current parsing position (a JSON object or array) into Jackson's generic JSON tree model. Once you have this, you can access the data randomly, regardless of the order in which things appear in the file (in the example, field1 and field2 are not always in the same order). Jackson supports mapping onto your own Java objects too. The jp.skipChildren() call is convenient: it lets you skip over a complete object tree or an array without stepping through all the events contained in it.
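The same idea, streaming over the outer structure while tree-parsing each record, can be sketched with only the Python standard library (an illustration of the pattern, not the original Java approach; `iter_records` and its simplifying assumptions are mine). `json.JSONDecoder.raw_decode` parses exactly one JSON value starting at a given offset, which plays the role of readValueAsTree():

```python
import json

def iter_records(text):
    """Yield the elements of the top-level "records" array one at a time,
    without ever building the whole array in memory.

    Simplifying assumptions (this is a sketch): the input is held as a
    string, and the first '[' after the literal "records" key opens the
    array we want.
    """
    dec = json.JSONDecoder()
    i = text.index('"records"')
    i = text.index('[', i) + 1           # step inside the array
    while True:
        while text[i] in ' \t\r\n,':     # skip separators between elements
            i += 1
        if text[i] == ']':               # end of the array: we are done
            return
        # Parse exactly one element (the readValueAsTree() analogue);
        # raw_decode returns the value and the offset just past it.
        obj, i = dec.raw_decode(text, i)
        yield obj

doc = ('{ "records": [ {"field1": "outer", "field2": "thought"}, '
       '{"field2": "thought", "field1": "outer"} ], '
       '"special message": "hello, world!" }')
for rec in iter_records(doc):
    print(rec["field1"], rec["field2"])
```

A production version would read from a buffered stream rather than a full string; libraries such as ijson package exactly this incremental pattern.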

Once again, this illustrates the great value there is in the open source libraries out there.
