
Loading and Retrieving Data in Neptune

In this lab, you will load data from an S3 bucket into an existing Neptune instance using the bulk load feature. This is far more efficient than executing a large number of `INSERT` statements, `addVertex` and `addEdge` steps, or other API calls. The Neptune instance will be available when you start the lab. However, you will need to create an IAM role and an S3 bucket, so prior knowledge of the IAM and S3 services is suggested.
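For context, the statement-at-a-time alternative that the bulk loader avoids looks roughly like the sketch below: each triple (or small batch of triples) costs its own SPARQL UPDATE request against the cluster's /sparql endpoint. The endpoint and triple values here are placeholders, not part of the lab environment.

```bash
# Hypothetical statement-at-a-time insert: the pattern the bulk loader replaces.
# Every triple requires a separate round trip to the cluster's SPARQL endpoint.
curl -X POST "https://<your-neptune-cluster-endpoint>:8182/sparql" \
  --data-urlencode 'update=INSERT DATA { <http://example.org/alice> <http://example.org/knows> <http://example.org/bob> . }'
```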

Path Info

Level
Intermediate
Duration
1h 30m
Published
Oct 09, 2020

Table of Contents

  1. Challenge

    Create an S3 Bucket and Grant Access

    1. In the AWS Management Console, navigate to S3.
    2. Visit this lab's GitHub repo and download the neptune-data.rdf file to your local machine.
    3. Create an S3 bucket with the following parameters:
      • Bucket name: neptune-import<INSERT_CURRENT_DATE_HERE>
      • Region: US East (N. Virginia)
    4. Copy the S3 bucket name into a text file for later use.
    5. Upload the neptune-data.rdf file from your local machine.
    6. Create an IAM Role called neptune-import with AmazonS3ReadOnlyAccess permissions.
    7. Edit the trust relationship for the neptune-import role and add the rds.amazonaws.com service.
    8. Add the neptune-import role to your neptune-cluster. (A hedged AWS CLI sketch of these steps follows this list.)
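The console steps in this challenge can also be approximated with the AWS CLI. The sketch below is illustrative only: the bucket name, account ID, and cluster identifier are placeholders to substitute with your own values, and it assumes neptune-data.rdf is in your working directory.

```bash
# Create the bucket and upload the data file (bucket name is a placeholder).
aws s3 mb s3://neptune-import-2020-10-09 --region us-east-1
aws s3 cp neptune-data.rdf s3://neptune-import-2020-10-09/

# Trust policy that lets the Neptune (rds.amazonaws.com) service assume the role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "rds.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role, grant read-only S3 access, and attach the role to the cluster.
aws iam create-role --role-name neptune-import --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name neptune-import \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws neptune add-role-to-db-cluster --db-cluster-identifier neptune-cluster \
  --role-arn arn:aws:iam::<account-id>:role/neptune-import
```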
  2. Challenge

    Load the Data

    1. In the AWS Management Console, navigate to VPC.
    2. Create an endpoint for the S3 service using the com.amazonaws.us-east-1.s3 service name, and select the route table ID that is associated with two subnets.
    3. Copy the neptune-import role ARN into a text file for later use.
    4. Copy the neptune-cluster endpoint and port number into a text file for later use.
    5. In your local terminal, connect to the lab instance using the provided lab credentials.
    6. Save the neptune-cluster endpoint URL as an environment variable.
    7. Use curl to submit the load request, adding your unique role ARN to iamRoleArn and your unique bucket name to source. If successful, a 200 OK status will appear.
    8. Copy the loadId (from the 200 OK response) to monitor the job. (A hedged sketch of the request and status check follows this list.)
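A hedged command-line sketch of this challenge is below. The VPC ID, route table ID, cluster endpoint, account ID, and bucket name are placeholders, and the format value assumes neptune-data.rdf is RDF/XML.

```bash
# Gateway endpoint so the cluster's VPC can reach S3 (IDs are placeholders).
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> \
  --service-name com.amazonaws.us-east-1.s3 --route-table-ids <route-table-id>

# Submit the bulk load job to the cluster's loader endpoint.
export NEPTUNE_ENDPOINT=<your-neptune-cluster-endpoint>
curl -X POST -H 'Content-Type: application/json' "https://${NEPTUNE_ENDPOINT}:8182/loader" -d '{
  "source": "s3://neptune-import-2020-10-09/neptune-data.rdf",
  "format": "rdfxml",
  "iamRoleArn": "arn:aws:iam::<account-id>:role/neptune-import",
  "region": "us-east-1",
  "failOnError": "FALSE"
}'

# The 200 OK response includes a loadId; poll it to monitor the job.
curl "https://${NEPTUNE_ENDPOINT}:8182/loader/<loadId>"
```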
  3. Challenge

    Query the Data

    1. Download the RDF4J client.
    2. Extract the client.
    3. Create a SPARQL repo. Be sure to include your Neptune endpoint and append :8182/sparql at the end.
    4. Open the repo to view the submitted S3 bucket data.
    5. Query the data. (An example query follows this list.)
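If you want to sanity-check the load without the RDF4J client, the same SPARQL endpoint can be queried directly with curl. The catch-all query below is only an illustration and assumes the NEPTUNE_ENDPOINT variable from the previous challenge.

```bash
# Return a handful of triples to confirm the S3 data was loaded.
curl -X POST "https://${NEPTUNE_ENDPOINT}:8182/sparql" \
  --data-urlencode 'query=SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10'
```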

The Cloud Content team comprises subject matter experts hyper-focused on services offered by the leading cloud vendors (AWS, GCP, and Azure), as well as cloud-related technologies such as Linux and DevOps. The team is thrilled to share their knowledge to help you build modern tech solutions from the ground up, secure and optimize your environments, and so much more!

What's a lab?

Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.

Provided environment for hands-on practice

We will provide the credentials and environment necessary for you to practice right within your browser.

Guided walkthrough

Follow along with the author’s guided walkthrough and build something new in your provided environment!

Did you know?

On average, you retain 75% more of your learning if you get time for practice.
