Analyze KYC Documents with AWS Lambda and Amazon Rekognition in Go
Victor Mwania
September 19, 2023


KYC (Know Your Customer) is a process that businesses, especially in the financial and investment industries, are required to undertake to verify their customers and gather information about their financial profile and associated risks. Depending on the level of verification a user achieves, a business may offer various services. For instance, basic verification of identification documents may grant users access to basic-level services.

To perform this verification, a business can choose to manually verify the user, use a third-party vendor that offers KYC as a service, or build a customized KYC service tailored to their specific requirements. In this article, we will explore the latter option, which involves creating a KYC service using AWS with Lambda, Rekognition, and S3, all implemented in the Go programming language.

We will delve into the process of analyzing identity documents and users' selfies, as well as comparing the two images with Rekognition to obtain verification results.

Prerequisites

  • An S3 bucket
  • Amazon Rekognition (with permission to read from S3)
  • AWS Lambda

The Lambda

You can find the complete code for this Lambda function in this repository. To follow along, initialize a Go module in the kyc-documents-analysis-lambda folder, or in a folder of your own choosing, using the following command:

$ go mod init kyc-documents-analysis-lambda

Next, fetch the two packages needed to build this Lambda function:

$ go get -u github.com/aws/aws-lambda-go/lambda
$ go get github.com/aws/aws-sdk-go

main.go

This is what our main.go is going to look like; we simply import the handler package and pass its Handler function to lambda.Start:

package main

import (
	"github.com/aws/aws-lambda-go/lambda"
	"kyc-documents-analysis-lambda/handler"
)

func main() {
	lambda.Start(handler.Handler)
}

handler.go

Now we write our handler, which will live in the handler/ directory. We will first analyze the selfie image, then the identity document, then compare the selfie against the identity document, and finally return the results of all three checks.

type Result struct {
	SelfieDetails         rekognition.FaceDetail         `json:"selfieDetails"`
	DocumentFaceDetails   rekognition.FaceDetail         `json:"documentFaceDetails"`
	SelfieMatchesDocument rekognition.CompareFacesOutput `json:"selfieMatchesDocument"`
}

Our Lambda expects three values in the request body: selfieImage, documentImage, and the name of the S3 bucket that holds the two images.

type S3ImageDetailsRequest struct {
	Bucket        string `json:"bucket"`
	SelfieImage   string `json:"selfieImage"`
	DocumentImage string `json:"documentImage"`
}

func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {

	var s3ImageDetails S3ImageDetailsRequest

	err := json.Unmarshal([]byte(request.Body), &s3ImageDetails)

	if err != nil {
		log.Println("Error unmarshalling request body:", err)

		return events.APIGatewayProxyResponse{
			StatusCode: 400,
			Body:       "Invalid request body",
		}, nil
	}
	// ... the handler continues in the following sections

We need to create a session to access AWS through the SDK; this will let us call Amazon Rekognition.

	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})

	if err != nil {
		log.Println("Failed to create AWS session:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error creating rekognition session: %v", err)
	}

	svc := rekognition.New(sess)

Check Selfie

To detect faces in an image we use the DetectFaces function, which takes an input of type DetectFacesInput provided by the rekognition package:

	checkSelfieInput := &rekognition.DetectFacesInput{
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.SelfieImage),
			},
		},
	}

	checkSelfie, err := svc.DetectFaces(checkSelfieInput)
	if err != nil {
		log.Println("Failed to detect faces in selfie:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error checking selfie image: %v", err)
	}

	var selfieDetails rekognition.FaceDetail

	if len(checkSelfie.FaceDetails) > 0 {
		// Assume there is one face in the image
		selfieDetails = *checkSelfie.FaceDetails[0]

	}

Check Identity Document

This works the same way as the selfie check; only the S3 object key changes:

	checkIdentityDocumentInput := &rekognition.DetectFacesInput{
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.DocumentImage),
			},
		},
	}

	checkIdentityDocument, err := svc.DetectFaces(checkIdentityDocumentInput)

	if err != nil {
		log.Println("Failed to detect faces in identity document:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error checking identity document: %v", err)

	}

	var identityDocumentFaceDetails rekognition.FaceDetail

	if len(checkIdentityDocument.FaceDetails) > 0 {
		// Assume there is one face in the image
		identityDocumentFaceDetails = *checkIdentityDocument.FaceDetails[0]

	}

Compare Selfie With Document

In this step, we use the CompareFaces function to compare the faces detected in the selfie and identity document images. The comparison returns a similarity score between the two, which is the crucial signal for KYC verification.

	compareSelfieWithIDDocumentInput := &rekognition.CompareFacesInput{
		SourceImage: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.DocumentImage),
			},
		},
		TargetImage: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.SelfieImage),
			},
		},
	}

	compareSelfieWithIdDocument, err := svc.CompareFaces(compareSelfieWithIDDocumentInput)

	if err != nil {
		log.Println("Failed to compare faces:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error comparing selfie with identity document: %v", err)
	}

With all three analyses done, we can return the combined result (the Response and Result types are defined in the full listing below):

	response := Response{
		Message: "KYC Documents Analysis Results",
		Result: Result{
			SelfieDetails:         selfieDetails,
			DocumentFaceDetails:   identityDocumentFaceDetails,
			SelfieMatchesDocument: *compareSelfieWithIdDocument,
		},
	}

	body, err := json.Marshal(response)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error marshalling response: %v", err)
	}

	return events.APIGatewayProxyResponse{
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		StatusCode: 200,
		Body:       string(body),
	}, nil

With this result, we can store it in our database to complete the verification, or publish it to a Kafka topic for a consumer that either reshapes it into the format needed or performs the verification itself.

Here is the final code of the handler.go file:

package handler

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

type S3ImageDetailsRequest struct {
	Bucket        string `json:"bucket"`
	SelfieImage   string `json:"selfieImage"`
	DocumentImage string `json:"documentImage"`
}

type Result struct {
	SelfieDetails         rekognition.FaceDetail         `json:"selfieDetails"`
	DocumentFaceDetails   rekognition.FaceDetail         `json:"documentFaceDetails"`
	SelfieMatchesDocument rekognition.CompareFacesOutput `json:"selfieMatchesDocument"`
}

type Response struct {
	Message string `json:"message"`
	Result  Result `json:"result"`
}

func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {

	var s3ImageDetails S3ImageDetailsRequest

	err := json.Unmarshal([]byte(request.Body), &s3ImageDetails)

	if err != nil {
		log.Println("Error unmarshalling request body:", err)

		return events.APIGatewayProxyResponse{
			StatusCode: 400,
			Body:       "Invalid request body",
		}, nil
	}

	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})

	if err != nil {
		log.Println("Failed to create AWS session:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error creating rekognition session: %v", err)
	}

	svc := rekognition.New(sess)

	checkSelfieInput := &rekognition.DetectFacesInput{
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.SelfieImage),
			},
		},
	}

	checkSelfie, err := svc.DetectFaces(checkSelfieInput)
	if err != nil {
		log.Println("Failed to detect faces in selfie:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error checking selfie image: %v", err)
	}

	var selfieDetails rekognition.FaceDetail

	if len(checkSelfie.FaceDetails) > 0 {
		// Assume there is one face in the image
		selfieDetails = *checkSelfie.FaceDetails[0]

	}

	checkIdentityDocumentInput := &rekognition.DetectFacesInput{
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.DocumentImage),
			},
		},
	}

	checkIdentityDocument, err := svc.DetectFaces(checkIdentityDocumentInput)

	if err != nil {
		log.Println("Failed to detect faces in identity document:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error checking identity document: %v", err)

	}

	var identityDocumentFaceDetails rekognition.FaceDetail

	if len(checkIdentityDocument.FaceDetails) > 0 {
		// Assume there is one face in the image
		identityDocumentFaceDetails = *checkIdentityDocument.FaceDetails[0]

	}

	compareSelfieWithIDDocumentInput := &rekognition.CompareFacesInput{
		SourceImage: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.DocumentImage),
			},
		},
		TargetImage: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String(s3ImageDetails.Bucket),
				Name:   aws.String(s3ImageDetails.SelfieImage),
			},
		},
	}

	compareSelfieWithIdDocument, err := svc.CompareFaces(compareSelfieWithIDDocumentInput)

	if err != nil {
		log.Println("Failed to compare faces:", err)
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error comparing selfie with identity document: %v", err)
	}

	response := Response{
		Message: "KYC Documents Analysis Results",
		Result: Result{
			SelfieDetails:         selfieDetails,
			DocumentFaceDetails:   identityDocumentFaceDetails,
			SelfieMatchesDocument: *compareSelfieWithIdDocument,
		},
	}

	body, err := json.Marshal(response)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, fmt.Errorf("error marshalling response: %v", err)
	}

	return events.APIGatewayProxyResponse{
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		StatusCode: 200,
		Body:       string(body),
	}, nil
}

Conclusion

We have implemented a KYC service using AWS Lambda, Amazon Rekognition, and Go. This service performs facial analysis on identity documents and selfies, and compares the two to verify user identities. The results can be used directly for user verification or stored in a database for record-keeping.

Thank you for following along with this guide on KYC document analysis with AWS services. If you have any questions or would like to explore additional features, refer to the Amazon Rekognition documentation, or use the AWS SAM CLI for local testing.
