Elasticsearch autocomplete with a Rails API backend and an Angular frontend
In this post, I’ll share sample code that integrates a Rails API backend with Elasticsearch and an Angular frontend to implement autocomplete functionality.
To install Elasticsearch, I simply use brew:
brew install elasticsearch
brew services start elasticsearch
Part 1: the Rails backend
Initial project setup:
# create directory and RVM files:
mkdir elasticsearch_demo
echo ruby-2.2.3 > elasticsearch_demo/.ruby-version
echo elasticsearch_demo > elasticsearch_demo/.ruby-gemset
cd elasticsearch_demo
# install rails 5 gem and scaffold a new project
gem install rails
rails new . -d postgresql --skip-action-mailer --skip-action-cable --skip-sprockets --skip-spring --skip-javascript --skip-turbolinks --skip-test --api
# setup database
rake db:create db:migrate
Add the Ruby gems. Edit file: Gemfile, add:
gem 'elasticsearch-model', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
gem 'elasticsearch-rails', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
gem 'rack-cors', :require => 'rack/cors'
gem 'sidekiq'
Execute bundle install to install the gems.
Edit file config/application.rb to enable Elasticsearch logging; add:
require 'elasticsearch/rails/instrumentation'
Create a migration to add the people table. New file: db/migrate/20160929000649_create_people.rb
class CreatePeople < ActiveRecord::Migration[5.0]
  def change
    create_table :people do |t|
      t.string :first_name
      t.string :last_name
      t.timestamps
    end
  end
end
Execute the migration and create the table via: rake db:migrate.
Create the Person model. New file: app/models/person.rb. Check out the Elasticsearch completion suggester documentation for more information.
require 'elasticsearch/model'

class Person < ApplicationRecord
  # include elasticsearch
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  # define elasticsearch index and type for model
  index_name 'es_demo_people'
  document_type 'person'

  # custom elasticsearch mapping for autocompletion
  mapping do
    indexes :name, type: 'string'
    indexes :suggest, {
      type: 'completion',
      analyzer: 'simple',
      search_analyzer: 'simple',
      payloads: true
    }
  end

  # simple model validations
  validates :first_name, presence: true
  validates :last_name, presence: true

  # instance method to determine how models are indexed in elasticsearch
  def as_indexed_json(_options = {})
    {
      name: "#{first_name} #{last_name}",
      suggest: {
        input: [first_name, last_name],
        output: "#{first_name} #{last_name}",
        payload: { id: id, first_name: first_name, last_name: last_name }
      }
    }
  end

  # class method to execute autocomplete search
  def self.auto_complete(q)
    return nil if q.blank?
    search_definition = {
      'name-suggest' => {
        text: q,
        completion: {
          field: 'suggest'
        }
      }
    }
    __elasticsearch__.client.perform_request('GET', "#{index_name}/_suggest", {}, search_definition).body['name-suggest'].first['options']
  end
end
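To make the mapping concrete, here is a plain-Ruby sketch (no Rails required) of the document shape that as_indexed_json produces; “Jane Doe” and the id are hypothetical sample values:

```ruby
# Sketch of the document indexed into Elasticsearch for one Person.
# The name becomes the display value, while both the first and last
# names are completion inputs, so typing either one matches.
first_name = 'Jane'
last_name  = 'Doe'
doc = {
  name: "#{first_name} #{last_name}",
  suggest: {
    input: [first_name, last_name],
    output: "#{first_name} #{last_name}",
    payload: { id: 1, first_name: first_name, last_name: last_name }
  }
}
puts doc[:suggest][:input].inspect
```

Because both names are inputs, a query for either "jane" or "doe" can surface the same suggestion.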
At this point, you can force-create the Elasticsearch index mapping for the Person model:
rails c
> Person.__elasticsearch__.create_index! force: true
The Elasticsearch index mapping can be confirmed via: curl http://localhost:9200/es_demo_people/_mapping | python -m json.tool
Now we can create some data. To keep it simple, I used Sidekiq. Here is a simple worker. New file: app/workers/person_creator_worker.rb
class PersonCreatorWorker
  include Sidekiq::Worker

  def perform(first_name, last_name)
    Person.create!(first_name: first_name, last_name: last_name)
  end
end
And a rake task to create People from a *nix system dict file using Sidekiq. New file: lib/tasks/import.rake
namespace :import do
  desc "Import people"
  task people: :environment do
    words_path = '/usr/share/dict/words'
    fail unless File.exist?(words_path)
    two_words = []
    # iterate over the file (one word per line), collect 2 words,
    # and use them as a person's first and last names
    File.readlines(words_path).each do |line|
      two_words << line.strip
      if two_words.size > 1
        PersonCreatorWorker.perform_async(*two_words)
        two_words = []
      end
    end
  end
end
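The two-word buffering above can also be expressed with each_slice. Here is a standalone sketch of the same pairing logic, using a few inline sample words instead of /usr/share/dict/words:

```ruby
# Pair consecutive words into [first_name, last_name] tuples, mirroring
# the rake task's two_words buffer. An odd trailing word is dropped,
# just like the leftover word at the end of the dict file.
words = %w[apple banana cherry date elderberry]
pairs = words.each_slice(2).select { |pair| pair.size == 2 }
pairs.each { |first, last| puts "#{first} #{last}" }
# => prints "apple banana" and "cherry date"; "elderberry" is dropped
```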
Import the data via sidekiq and the new rake task. The import time can vary depending on how many workers and CPUs you use.
# terminal 1
sidekiq
# terminal 2
rake import:people
Now that we have data in Elasticsearch, we can make it accessible to the frontend via a new controller. The controller has one GET route that uses the “q” query string param for user input. New file: app/controllers/api/people_controller.rb
class Api::PeopleController < ApplicationController
  def auto_complete
    render json: Person.auto_complete(person_params[:q])
  end

  private

  def person_params
    params.permit(:q)
  end
end
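For reference, here is a hypothetical sample of what the controller renders: the options array pulled out of the _suggest response (shape per Elasticsearch 2.x, with payloads enabled in the mapping; the name and id are made-up example values):

```ruby
require 'json'

# Each suggest option carries the matched text, a relevance score, and
# the payload we stored in as_indexed_json, so the frontend gets the
# record's id and name parts without a second lookup.
sample_options = [
  {
    'text'    => 'Jane Doe',
    'score'   => 1.0,
    'payload' => { 'id' => 1, 'first_name' => 'Jane', 'last_name' => 'Doe' }
  }
]
puts JSON.pretty_generate(sample_options)
```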
Add the controller GET route. Edit file: config/routes.rb
Rails.application.routes.draw do
  namespace :api do
    get 'people/auto_complete'
  end
end
Add a basic CORS configuration. Edit file: config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'localhost:3001'
    resource '*',
             headers: :any,
             methods: [:get]
  end
end
This concludes the Rails API backend. Start the web server via: rails s
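To sanity-check the endpoint outside the browser, here is a small Ruby sketch; it assumes the server is running on localhost:3000, and 'ab' is an arbitrary sample query:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build the autocomplete URL with a properly encoded query string.
uri = URI('http://localhost:3000/api/people/auto_complete')
uri.query = URI.encode_www_form(q: 'ab')

# Only hit the network when explicitly asked to (the Rails server and
# Elasticsearch must both be running).
if ENV['RUN_LIVE']
  options = JSON.parse(Net::HTTP.get(uri))
  options.each { |o| puts "#{o['text']} => #{o['payload']}" }
end
```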
I put the Elasticsearch Rails Autocomplete source code on GitHub.
Part 2: the Angular frontend
For the Angular frontend, I chose to scaffold my project using the Yeoman Angular Gulp generator. By default, the generator adds sample content, modules, services, controllers, etc. I used the file structure as a guide and then deleted all the NPM packages and code that I didn’t want.
npm install -g yo gulp bower generator-gulp-angular
mkdir elasticsearchAutocomplete && cd $_
yo gulp-angular
# next, choose your desired technologies. I chose Angular UI Bootstrap, Sass (Node), standard JavaScript/HTML, etc.
npm install && bower install
The first file I created was a new autocomplete service to call the Rails API backend. New file: src/app/components/autocomplete/autocomplete.service.js
(function() {
  'use strict';

  angular
    .module('elasticsearchAutocomplete')
    .factory('PersonAutocomplete', PersonAutocomplete);

  function PersonAutocomplete($http, $log) {
    var apiHost = 'http://localhost:3000';
    var service = {
      apiHost: apiHost,
      search: search
    };
    return service;

    function search(q) {
      q = q || '';
      // encode the query so characters like spaces and '&' are safe in the URL
      return $http.get(apiHost + '/api/people/auto_complete?q=' + encodeURIComponent(q))
        .then(searchSuccess)
        .catch(searchFailure);

      function searchSuccess(response) {
        if (!response.data) return [];
        return response.data;
      }

      function searchFailure(error) {
        $log.error('XHR Failed.\n' + angular.toJson(error.data, true));
      }
    }
  }
})();
Next I added the autocomplete directive. It injects the autocomplete service as a dependency, specifies which HTML template to use, and implements the controller functionality. New file: src/app/components/autocomplete/autocomplete.directive.js
(function() {
  'use strict';

  angular
    .module('elasticsearchAutocomplete')
    .directive('autocomplete', autocomplete);

  function autocomplete() {
    var directive = {
      bindToController: true,
      controller: AutocompleteController,
      controllerAs: 'vm',
      link: autocompleteLink,
      restrict: 'E',
      templateUrl: 'app/components/autocomplete/autocomplete.html'
    };
    return directive;

    // inject the PersonAutocomplete service and $timeout
    function AutocompleteController($timeout, PersonAutocomplete) {
      var vm = this;
      vm.timeout = $timeout;

      // initialize and reset vm (scope) variables
      var reset = function() {
        vm.autocompleteResults = [];
        vm.clickedResult = undefined;
        vm.hasAutocompleteResults = false;
        vm.q = undefined;
      };
      reset();

      // handle a user click on a search result
      vm.autocompleteResultClick = function(result) {
        vm.clickedResult = JSON.stringify(result, null, 2);
      };

      // clear the search results
      vm.clearSearch = function() {
        reset();
      };

      // execute the search by calling the service
      vm.search = function() {
        PersonAutocomplete.search(vm.q).then(function(response) {
          vm.clickedResult = undefined;
          vm.autocompleteResults = response;
          vm.hasAutocompleteResults = vm.autocompleteResults.length > 0;
        });
      };
    }

    // to manipulate the DOM, implement Angular's link callback
    // @see: https://docs.angularjs.org/guide/directive
    function autocompleteLink(scope, element, attrs, vm) {
      var autocompleteTimer;

      // use a timeout on key press to delay requests to the service
      var keyPressed = function(event) {
        if (autocompleteTimer) {
          vm.timeout.cancel(autocompleteTimer);
        }
        autocompleteTimer = vm.timeout(function() {
          vm.search();
        }, 500);
      };

      var inputField = element.find('input#search-q');
      inputField.on('keyup', keyPressed);
    }
  }
})();
Here is the basic autocomplete HTML template I used for the markup. New file: src/app/components/autocomplete/autocomplete.html
<form class="form-inline">
  <div class="form-group">
    <label class="sr-only" for="search-q">Search text</label>
    <input type="text" class="form-control" id="search-q" placeholder="Search text" ng-model="vm.q" autocomplete="off">
  </div>
  <button type="submit" class="btn btn-primary" ng-click="vm.search()">Search</button>
  <button type="submit" class="btn btn-default" ng-click="vm.clearSearch()">Clear</button>
</form>
<ul class="list-group" ng-if="vm.hasAutocompleteResults">
  <li class="list-group-item" ng-repeat="result in vm.autocompleteResults" ng-click="vm.autocompleteResultClick(result)">{{result.text}}</li>
</ul>
<pre ng-if="vm.clickedResult">{{vm.clickedResult}}</pre>
I then revised the main HTML file to include the new directive. Edit file: src/app/main/main.html
<div class="container">
  <div>
    <acme-navbar></acme-navbar>
  </div>
  <div class="row">
    <div class="col-md-12">
      <autocomplete></autocomplete>
    </div>
  </div>
</div>
Start the Angular frontend server via: gulp serve.
Here is a GIF showing all the components working together:
I put the Elasticsearch Angular Autocomplete source code on GitHub.