10.6 Adding a Web UI
As a finale to this chapter, let’s add a web UI on top of this small search system, as we did in Chapter 3, Integrating dRuby with eRuby. The system so far consists of a crawler, an indexer, and Drip acting as middleware. We’ll add WEBrick::HTTPServer and a servlet as the web UI component. We used WEBrick::CGI for the Reminder CGI interface (remember?), but this time we’ll use the HTTPServer class.
Running all these components as independent processes is a bit hard to manage, so we’ll change the layout and put them all in one process. Rearranging processes and objects is easy because the interface between processes is almost transparent in a dRuby system: remote objects look just like normal Ruby objects. Here is the entire script:
demo_ui.rb
require './index'
require './crawl'
require 'webrick/cgi'
require 'erb'

class DemoListView
  include ERB::Util
  extend ERB::DefMethod
  def_erb_method('to_html(word, list)', ERB.new(<<EOS))
<html><head><title>Demo UI</title></head><body>
<form method="post">
<input type="text" name="w" value="<%=h word %>" />
</form>
<% if word %>
<p>search: <%=h word %></p>
<ul>
<% list.each do |fname| %>
<li><%=h fname%></li>
<% end %>
</ul>
<% end %>
</body></html>
EOS
end

class DemoUICGI < WEBrick::CGI
  def initialize(crawler, indexer, *args)
    super(*args)
    @crawler = crawler
    @indexer = indexer
    @list_view = DemoListView.new
  end

  def req_query(req, key)
    value, = req.query[key]
    return nil unless value
    value.force_encoding('utf-8')
    value
  end

  def do_GET(req, res)
    if req.path_info == '/quit'
      Thread.new do
        @crawler.quit
      end
    end
    word = req_query(req, 'w') || ''
    list = word.empty? ? [] : @indexer.dict.query(word)
    res['content-type'] = 'text/html; charset=utf-8'
    res.body = @list_view.to_html(word, list)
  end

  alias do_POST do_GET
end

if __FILE__ == $0
  crawler = Crawler.new
  Thread.new do
    while true
      pp crawler.do_crawl
      sleep 60
    end
  end

  indexer = Indexer.new
  Thread.new do
    indexer.update_dict
  end

  cgi = DemoUICGI.new(crawler, indexer)
  DRb.start_service('druby://localhost:50830', cgi)
  DRb.thread.join
end
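The transparency mentioned above is worth seeing in isolation. Here is a minimal sketch of how a front object exported with DRb.start_service looks to a client; demo_ui.rb exports its DemoUICGI instance in exactly this way. The Front class and port 50831 are made up for this example:

```ruby
require 'drb/drb'

# A process exports one front object; a client then drives it as if it
# were local. (Front and port 50831 are invented for this sketch.)
class Front
  def greet(name)
    "hello, #{name}"
  end
end

DRb.start_service('druby://localhost:50831', Front.new)
remote = DRbObject.new_with_uri('druby://localhost:50831')
p remote.greet('crawler')   # looks like a local call, runs in the server
```

Nothing in the client code reveals that `remote` lives in another object space, which is why relocating the crawler, indexer, and UI into one process requires so little change.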
You can use WEBrick’s own HTTP server directly rather than an actual web server plus CGI. Here is the WEBrick version:
demo_ui_webrick.rb
require './index'
require './crawl'
require 'webrick'
require 'erb'

class DemoListView
  include ERB::Util
  extend ERB::DefMethod
  def_erb_method('to_html(word, list)', ERB.new(<<EOS))
<html><head><title>Demo UI</title></head><body>
<form method="post">
<input type="text" name="w" value="<%=h word %>" />
</form>
<% if word %>
<p>search: <%=h word %></p>
<ul>
<% list.each do |fname| %>
<li><%=h fname%></li>
<% end %>
</ul>
<% end %>
</body></html>
EOS
end

class DemoUIServlet < WEBrick::HTTPServlet::AbstractServlet
  def initialize(server, crawler, indexer, list_view)
    super(server)
    @crawler = crawler
    @indexer = indexer
    @list_view = list_view
  end

  def req_query(req, key)
    value, = req.query[key]
    return nil unless value
    value.force_encoding('utf-8')
    value
  end

  def do_GET(req, res)
    word = req_query(req, 'w') || ''
    list = word.empty? ? [] : @indexer.dict.query(word)
    res['content-type'] = 'text/html; charset=utf-8'
    res.body = @list_view.to_html(word, list)
  end

  alias do_POST do_GET
end

if __FILE__ == $0  # ①
  crawler = Crawler.new
  Thread.new do
    while true
      pp crawler.do_crawl
      sleep 60
    end
  end

  indexer = Indexer.new
  Thread.new do
    indexer.update_dict
  end

  server = WEBrick::HTTPServer.new({:Port => 10080,
                                    :BindAddress => '127.0.0.1'})
  server.mount('/', DemoUIServlet, crawler, indexer, DemoListView.new)
  trap('INT') { server.shutdown }
  server.start
  crawler.quit
end
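Note how the extra arguments given to server.mount reach DemoUIServlet#initialize: WEBrick stores them with the mount point and passes them along when it instantiates the servlet for a request. Here is a plain-Ruby sketch of that dispatch, with FakeServer and DemoServlet invented for the example (the real server calls klass.get_instance(self, *extra), which defaults to .new):

```ruby
# Sketch of WEBrick's mount-argument passing; no WEBrick required.
# FakeServer and DemoServlet are hypothetical stand-ins.
class FakeServer
  def initialize
    @mounts = {}
  end

  def mount(path, klass, *extra)
    @mounts[path] = [klass, extra]   # remember servlet class + extra args
  end

  def dispatch(path)
    klass, extra = @mounts[path]
    klass.new(self, *extra)          # WEBrick: klass.get_instance(self, *extra)
  end
end

class DemoServlet
  attr_reader :banner
  def initialize(server, banner)
    @server = server
    @banner = banner
  end
end

server = FakeServer.new
server.mount('/', DemoServlet, 'hello')
p server.dispatch('/').banner        # => "hello"
```

This is why DemoUIServlet can take the crawler, indexer, and view as constructor arguments even though WEBrick, not our code, creates the servlet instances.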
There are two new classes in this code. DemoUIServlet is in charge of the web UI, and DemoListView is a view class that renders the HTML.
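DemoListView follows a pattern worth isolating: def_erb_method compiles the template string into an ordinary instance method at class-definition time, so each render is just a method call with no ERB parsing per request. Here is the same pattern reduced to a sketch; TinyView and its template are made up for this example:

```ruby
require 'erb'

# TinyView is a hypothetical miniature of DemoListView.
class TinyView
  include ERB::Util    # provides h() for HTML escaping
  extend ERB::DefMethod
  def_erb_method('to_html(items)', ERB.new(<<EOS))
<ul>
<% items.each do |item| %>
<li><%=h item %></li>
<% end %>
</ul>
EOS
end

puts TinyView.new.to_html(['a.txt', '<b>.txt'])
```

The second item comes out as `&lt;b&gt;.txt`, showing why every interpolation in the templates goes through h: user-supplied words and file names are escaped before they reach the browser.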
Let’s look at the code block at ①. It starts the HTTP server after launching crawl.rb and index.rb in subthreads. You can stop the server by sending a signal such as Ctrl-C; the crawler will stop once it becomes idle.
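The shutdown sequence rests on trap('INT') { server.shutdown }: the signal handler only asks the server to stop, server.start then returns, and the cleanup after it (crawler.quit in the listing) runs normally. A toy version of that flow, with the request loop simulated:

```ruby
# Toy sketch of the trap-based shutdown; the loop stands in for server.start.
stopped = false
trap('INT') { stopped = true }   # stands in for { server.shutdown }

requests = 0
until stopped
  requests += 1                                        # "serve" one request
  Process.kill('INT', Process.pid) if requests == 3    # simulate Ctrl-C
end
puts "served #{requests} requests, shutting down"      # cleanup runs here
```

Because the handler merely flips state instead of exiting the process, in-flight work finishes and the code after the main loop still executes.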
You may wonder what the point of using Drip is if the crawler and indexer run in a single process. Having only one process lets you start the application easily, as if you were launching a desktop application, and fewer processes are easier to daemonize.