question

thaison asked thaison commented

What is the optimal way of fetching data that traverses 10K+ edges and vertices?

Hi,

I would like to traverse all outgoing edges and collect data along the path (from both the edge and the vertex). It works fine for a small number of edges, but it runs into timeouts when the edge count is high (10K+).

The traversal is something like:

g.V(id)                  // single source vertex
    .outE("uses")        // 10K edges
    .project("inVPropA", "inVPropB", "edgePropA", "edgePropB", "edgePropC", "edgePropD")
    .by(__.inV().values("A")).by(__.inV().values("B"))
    .by(__.values("A")).by(__.values("B")).by(__.values("C")).by(__.values("D"))

Does anyone know of a faster traversal that produces the same result, or have any other suggestions?

graph


1 Answer

jeromatron answered thaison commented

Without changing the traversal, you should be able to stream results, which would at least avoid the timeout. With DSE Graph you can stream results using the Apache TinkerPop drivers against its Gremlin Server endpoint, and with the 6.8 Core graph engine you can stream results using DataStax-specific driver functionality. Would that be what you're looking for?
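By way of illustration, a minimal sketch of submitting the same traversal as a script through the Apache TinkerPop Java driver and consuming results as they arrive might look like the following. The host, port, vertexId, and process() are placeholders for illustration, not values from the question:

import java.util.Collections;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;

Cluster cluster = Cluster.build("dse-host").port(8182).create();
Client client = cluster.connect();
try {
    // submit() returns once the first response frame arrives; the ResultSet is
    // filled incrementally as the server pushes further batches, so consumption
    // can start before all 10K rows have been materialized
    ResultSet rs = client.submit(
        "g.V(vid).outE('uses')"
      + ".project('inVPropA','inVPropB','edgePropA','edgePropB','edgePropC','edgePropD')"
      + ".by(inV().values('A')).by(inV().values('B'))"
      + ".by(values('A')).by(values('B')).by(values('C')).by(values('D'))",
        Collections.singletonMap("vid", vertexId));
    for (Result r : rs) {    // blocks only until the next batch is available
        process(r.getObject());
    }
} finally {
    client.close();
    cluster.close();
}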

1 comment


thaison commented

Thanks @jeromatron. This is how we are executing and getting the results. Is this what you mean by streaming the results, or is there another way?

FluentGraphStatement statement = FluentGraphStatement.newInstance(traversal);
CqlSession cqlSession = dseSession.getSession();
GraphResultSet resultSet = cqlSession.execute(statement);

// iterate once over the result set instead of calling resultSet.iterator()
// repeatedly, which re-reads the same underlying iterator
Iterator<GraphNode> nodes = resultSet.iterator();
while (nodes.hasNext()) {
    GraphNode node = nodes.next();
    GraphNode labelNode = node.getByKey(T.label);
    String label = labelNode != null ? labelNode.asString() : null;
    // add the result node; the last flag marks whether more results follow
    results.accept(new ResultNode(label, node, true, nodes.hasNext()));
}
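For comparison, a paged asynchronous variant of the same execution could look like the sketch below. This assumes the 4.x driver; dseSession, traversal, and results are carried over from the snippet above, and consumePages is a hypothetical helper:

// assumed imports: java.util.concurrent.CompletionStage,
// com.datastax.dse.driver.api.core.graph.AsyncGraphResultSet,
// com.datastax.dse.driver.api.core.graph.GraphNode
FluentGraphStatement statement = FluentGraphStatement.newInstance(traversal);
CompletionStage<AsyncGraphResultSet> stage = dseSession.getSession().executeAsync(statement);
stage.thenAccept(this::consumePages);

void consumePages(AsyncGraphResultSet page) {
    for (GraphNode node : page.currentPage()) {   // only the current page is held in memory
        // handle node, e.g. results.accept(...)
    }
    if (page.hasMorePages()) {
        page.fetchNextPage().thenAccept(this::consumePages);  // fetch the next page asynchronously
    }
}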