We use undirected graphs as our datasets. The Stanford Network Analysis Project (SNAP) provides an abundance of them. We use three datasets: a tiny one with a few lines for debugging, the Facebook dataset from SNAP, and the LiveJournal dataset from SNAP. LiveJournal is our final experiment dataset; it contains 3,997,962 nodes and 177,820,130 edges.
Each SNAP dataset comes with complete summary statistics, including the number of nodes and edges, the clustering coefficient, and, most importantly for us, the number of triangles. Thus we can easily check our results against the true answer.
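SNAP distributes these graphs as plain edge lists (one "u v" pair per line, with `#` comment lines). A minimal Python sketch of the loader we use for the serial baselines might look like this; the function name and the decision to drop self-loops are our own choices:

```python
# Load a SNAP-style edge list into an undirected adjacency structure.
# Lines starting with '#' are comments; each other line is "u v".
from collections import defaultdict

def load_snap_edges(path):
    adj = defaultdict(set)
    with open(path) as f:
        for line in f:
            if line.startswith('#'):
                continue
            u, v = map(int, line.split())
            if u != v:  # self-loops cannot be part of a triangle
                adj[u].add(v)
                adj[v].add(u)
    return adj
```

Storing neighbors as sets makes the edge-existence tests in the counting algorithms O(1) on average.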
We explored the triangle counting problem in depth. One branch of this problem is finding the approximate number of triangles. Although approximation can achieve much higher speedups, we found it hard to compare approaches fairly, since the degree of approximation affects the running complexity, and it was hard to choose a single one to parallelize. We therefore decided to work on exact triangle counting.
In his PhD dissertation, Thomas Schank clearly explains the existing algorithms for this problem. Triangle counting methods can be categorized into five types: try-all, matrix multiplication, tree-lister, node-iterator, and edge-iterator. We plan to optimize based on these methods using all the tools available to us. On the parallel side, there was no good parallel solution until 2011, when the Yahoo researchers Suri and Vassilvitskii published a Hadoop solution that avoids the "curse of the last reducer."
We began by implementing these algorithms as our starting point. The serial algorithms give us a baseline against which to compare the performance of our later GPU-parallelized code, while the Hadoop solutions let us compare against a different parallel approach.
**All nodes combinations**
Simply run a triply nested for loop over every triple of nodes and test whether all three edges exist.
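A minimal Python sketch of this brute-force baseline (function name is ours; `adj` maps each node to its neighbor set):

```python
# Brute force: test every triple of nodes. O(n^3), so only feasible on
# the tiny debugging graph, but it is a trustworthy reference answer.
def count_triangles_all_triples(nodes, adj):
    nodes = sorted(nodes)
    count = 0
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            for k in range(j + 1, len(nodes)):
                u, v, w = nodes[i], nodes[j], nodes[k]
                if v in adj[u] and w in adj[u] and w in adj[v]:
                    count += 1
    return count
```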
**Node Iterator**
For every node, check all pairs of its neighbors; if there is an edge between the two neighbors, count one triangle.
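The node iterator can be sketched in a few lines of Python (our own toy version, with `adj` as a node-to-neighbor-set dict). Each triangle is found once at each of its three vertices, hence the division by 3:

```python
# Node iterator: for each node, test every pair of its neighbors for an
# edge. Each triangle is counted once per vertex, so divide by 3.
from itertools import combinations

def count_triangles_node_iterator(adj):
    count = 0
    for u, nbrs in adj.items():
        for v, w in combinations(sorted(nbrs), 2):
            if w in adj[v]:
                count += 1
    return count // 3
```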
**Edge Iterator**
For every edge, count the common neighbors of its two endpoints; each common neighbor closes one triangle.
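A minimal sketch of the edge iterator, assuming neighbor sets so the intersection is cheap. Each triangle is seen once per edge, hence again the division by 3:

```python
# Edge iterator: for each undirected edge (u, v), every common neighbor
# of u and v closes a triangle. Each triangle has 3 edges, so divide by 3.
def count_triangles_edge_iterator(adj):
    count = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:  # visit each undirected edge only once
                count += len(adj[u] & adj[v])
    return count // 3
```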
**Hadoop Node Iterator**
The same as the node iterator, but implemented in MapReduce; it suffers from the curse of the last reducer.
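To make the round structure concrete, here is a toy in-memory simulation of the two MapReduce rounds (names are ours, and this is plain Python, not the Hadoop API). The map phase emits each node's neighbor pairs ("wedges") keyed by the missing edge; the reduce phase joins them against the real edge list. A single very-high-degree node emits O(d^2) wedges, which all land on one reducer; that skew is the curse of the last reducer:

```python
# Toy simulation of the MapReduce node-iterator round structure.
from collections import defaultdict
from itertools import combinations

def hadoop_node_iterator(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # "Map" phase: each node emits all pairs of its neighbors (wedges),
    # keyed by the pair that would close the triangle.
    wedges = defaultdict(int)
    for u, nbrs in adj.items():
        for v, w in combinations(sorted(nbrs), 2):
            wedges[(v, w)] += 1
    # "Reduce" phase: a wedge closes into a triangle iff its key is a
    # real edge. Each triangle is formed from 3 wedges, so divide by 3.
    edge_set = {tuple(sorted(e)) for e in edges}
    return sum(c for key, c in wedges.items() if key in edge_set) // 3
```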
**Hadoop Partition**
Partition the graph into many subgraphs and count the triangles in every union of three subgraphs. This increases the total amount of computation but keeps the same time complexity. Triangles that appear in several subgraph unions are then re-weighted so that each is counted exactly once.
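The re-weighting step can be made precise. With p partitions, a triangle whose vertices span s distinct partitions appears in every triple of partitions containing those s, i.e. in C(p-s, 3-s) of the C(p, 3) reducers, so each local count is divided by that multiplicity. A toy serial sketch of the idea (our own version, not Suri and Vassilvitskii's Hadoop code; the modulo partitioner stands in for a hash function):

```python
# Sketch of the graph-partition counting scheme with re-weighting.
from itertools import combinations
from math import comb

def partition_triangle_count(edges, p=4):
    bucket = lambda x: x % p  # stand-in for a hash partitioner
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0.0
    for trip in combinations(range(p), 3):   # one "reducer" per triple
        keep = set(trip)
        sub = {u: {v for v in nbrs if bucket(v) in keep}
               for u, nbrs in adj.items() if bucket(u) in keep}
        for u in sub:                        # enumerate triangles u < v < w
            for v, w in combinations(sorted(n for n in sub[u] if n > u), 2):
                if w in sub[v]:
                    # spans s distinct buckets => seen by C(p-s, 3-s) reducers
                    s = len({bucket(u), bucket(v), bucket(w)})
                    total += 1.0 / comb(p - s, 3 - s)
    return round(total)
```

The extra work comes from triangles that fall inside few partitions being recomputed by many reducers; the weighting cancels exactly that duplication.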
| Algorithm | Time on LiveJournal dataset (seconds) | Comments |
|---|---|---|
| All nodes combinations | Many hours | We did not record the exact number. |
| Hadoop Node Iterator | Not yet complete | Runnable only on a local machine; not yet running on AWS. |
| Hadoop Partition | Not yet complete | Runnable only on a local machine; not yet running on AWS. |
For the GPU implementation, we still believe we can achieve at least a 10x speedup over the serial algorithms. We would also like to surpass the Hadoop solution in terms of speed.
We will use charts to show our speedups on this problem.
We still need to focus on optimizing the algorithms for a parallel environment; we plan to try the methods from the paper and improve on them. One problem we have met is how to compare GPU and CPU programs fairly, since we cannot simply give a GPU and a CPU the same number of cores. We are now considering comparing machines of the same price on Amazon, measuring which one produces results faster.