Apache Spark application that implements the Minimax algorithm to play chess

An Apache Spark application written in Scala that implements the Minimax algorithm to play chess. The purpose of the application is to experiment with building and using a Minimax search tree over an Apache Spark cluster.
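
The full post contains the distributed implementation; as a quick reference, here is a minimal, non-distributed sketch of the Minimax recursion itself over an explicit game tree. The Node type and the leaf scores are illustrative and are not taken from the original application.

// Minimal Minimax sketch over an explicit game tree. Leaf scores are
// hypothetical evaluation values; chess move generation and the Spark-based
// tree construction are not shown here.
case class Node(score: Int, children: List[Node] = Nil)

object MinimaxSketch {
  def minimax(node: Node, maximizing: Boolean): Int =
    if (node.children.isEmpty) node.score
    else if (maximizing) node.children.map(minimax(_, maximizing = false)).max
    else node.children.map(minimax(_, maximizing = true)).min

  def main(args: Array[String]): Unit = {
    // Depth-2 toy tree: the maximizing player picks the branch whose best
    // minimizing reply is highest.
    val tree = Node(0, List(
      Node(0, List(Node(3), Node(5))), // opponent answers with 3
      Node(0, List(Node(2), Node(9)))  // opponent answers with 2
    ))
    println(minimax(tree, maximizing = true)) // prints 3
  }
}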


Apache Spark application to calculate the relevance of each word from a list of phrases

The following Apache Spark application, written in Scala, calculates the relevance of each word from a list of phrases using an initial weight value for each phrase.
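
The exact weighting scheme is described in the full post; the sketch below assumes one plausible scheme, in which each word inherits the weight of the phrase it appears in and a word's relevance is the sum of those weights. The data and names are illustrative only.

// Minimal sketch, assuming each word inherits its phrase's weight and a
// word's relevance is the sum of the weights it collects across phrases.
import org.apache.spark.{SparkConf, SparkContext}

object WordRelevanceSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordRelevanceSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // (phrase, initial weight) pairs -- illustrative data only.
    val phrases = sc.parallelize(Seq(
      ("spark makes big data simple", 1.0),
      ("big data needs big clusters", 0.5)
    ))

    val relevance = phrases
      .flatMap { case (phrase, weight) =>
        phrase.toLowerCase.split("\\s+").map(word => (word, weight))
      }
      .reduceByKey(_ + _)              // sum the weight contributed to each word
      .sortBy(_._2, ascending = false)

    relevance.collect().foreach { case (word, w) => println(s"$word -> $w") }
    sc.stop()
  }
}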

Add and subtract days from a timestamp in Java / Scala

The following function adds the number of days given in the days argument (a positive or negative integer) to the timestamp passed in the date argument. The function expects the input timestamp in the format indicated by the inputFormat argument and returns the new timestamp, resulting from the addition or subtraction, in the format indicated by the outputFormat argument (format example: “yyyyMMdd”).
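
The original post contains the actual implementation; the sketch below shows what such a function might look like using java.text.SimpleDateFormat and java.util.Calendar. The object name and exact signature are illustrative.

import java.text.SimpleDateFormat
import java.util.Calendar

object DateMath {
  // Adds (or, for negative values, subtracts) days to a timestamp string,
  // parsing it with inputFormat and formatting the result with outputFormat.
  def addDays(date: String, days: Int, inputFormat: String, outputFormat: String): String = {
    val parser = new SimpleDateFormat(inputFormat)
    val formatter = new SimpleDateFormat(outputFormat)
    val calendar = Calendar.getInstance()
    calendar.setTime(parser.parse(date))
    calendar.add(Calendar.DAY_OF_MONTH, days) // negative values move the date backwards
    formatter.format(calendar.getTime)
  }

  def main(args: Array[String]): Unit = {
    println(addDays("20240228", 2, "yyyyMMdd", "yyyy-MM-dd")) // 2024-03-01
    println(addDays("20240301", -1, "yyyyMMdd", "yyyyMMdd"))  // 20240229
  }
}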


Reading and writing Amazon S3 files from Apache Spark

The S3 Native Filesystem client, available when Apache Spark runs over Apache Hadoop, allows an Apache Spark application to access the Amazon S3 service. It is enough to define the S3 Access Key and the S3 Secret Access Key in the Spark context, as shown below:
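
The excerpt ends before the original snippet, so here is a minimal sketch of that configuration using the standard Hadoop S3N property names (fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey); the bucket, paths and key values are placeholders.

import org.apache.spark.{SparkConf, SparkContext}

object S3AccessSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("S3AccessSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Credentials for the S3 Native Filesystem (s3n) client.
    sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
    sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

    // Read and write through the s3n:// scheme.
    val lines = sc.textFile("s3n://your-bucket/input/data.txt")
    lines.saveAsTextFile("s3n://your-bucket/output/data-copy")

    sc.stop()
  }
}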
