Watson Cognitive Services from NodeRED

A Simple Image Classifier

When NodeRED is deployed into the IBM Cloud CloudFoundry runtime, it comes preconfigured with the node-red-node-watson package, which makes it straightforward to create flows that exploit Watson Cognitive services such as:

  • Visual Recognition - image classification
  • Speech to Text - transcription of audio
  • Text to Speech - synthesis of speech with a variety of voice options
  • Tone Analyzer - identify emotional content of text
  • Translation - translate between languages
  • Discovery - extract content and metadata from documents
  • Assistant - natural language processing for chatbot operations

The associated nodes appear in the IBM Watson section of the selection palette (screenshot: IBM Watson node list).

To give you experience with the Watson services, let's get you set up with an app that loads and previews an image, then submits it to Watson Visual Recognition for general classification.

The NodeRED flow embeds a sample upload form process from Web Code Geeks (screenshot: vr flow).

The interface for the app is deliberately simplistic (screenshot: start dialog).

Click Choose file to select an image (screenshot: select image); the chosen image is previewed (screenshot: preview image).

Click Classify to invoke Watson Visual Recognition (screenshot: vr results).

Import the flow

You will find the flow discussed here can be imported into your NodeRED environment from nodered-visrec-flow.json.

Simply open the above link, and copy the content of the file to the clipboard.

Then, from the NodeRED menu (screenshot: hamburger menu), open the Import --> Clipboard option (screenshot: import clipboard).

Paste the flow from the clipboard into the Import nodes window (screenshot: import nodes) and click Import.

This will let you drop the flow onto the NodeRED editing canvas.
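For orientation: a Node-RED flow export like nodered-visrec-flow.json is simply a JSON array of node definitions wired together. The heavily simplified sketch below shows only the general shape of such an export — the node ids, names, and wiring here are illustrative assumptions, not the actual contents of that file:

```json
[
  { "id": "n1", "type": "http in",       "url": "/chat",      "method": "get", "wires": [["n2"]] },
  { "id": "n2", "type": "template",      "name": "main page", "wires": [["n3"]] },
  { "id": "n3", "type": "http response", "wires": [] }
]
```

Each object is one node on the canvas, and the wires arrays connect a node's output to the next node's input — which is why pasting the JSON is all it takes to recreate the flow.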

Prepare the Watson Visual Recognition Service

As with the Machine Learning example, you will need to create a Watson Visual Recognition service instance in your IBM Cloud account.

  1. From the IBM Cloud console, select the Catalog option from the top menu bar (screenshot: catalog menu).

  2. Type "visual" in the search/filter area (screenshot: visual filter). Click on the Visual Recognition link.

  3. Wait for the Create button to activate (screenshot: create vr). Note: you may need to select US South or United Kingdom for the region/location, as this may default to Sydney. Click Create and you should see the service overview (screenshot: vr service overview).

  4. Copy the API key value to the clipboard, or your favourite scratchpad; this will be needed to configure the NodeRED Visual Recognition node to authorize use of the underlying API.

Breaking it down into stages

  1. The starting /chat node is again an HTTP endpoint, which can be invoked from a browser.

  2. The template node packages the JavaScript and the form needed to perform the preview and upload preparation (screenshot: form template). Note the mustache replacement parameter {{{results}}} -- this allows other flows to inject results into the main page. The second flow uses this to return the classification values from Watson Visual Recognition.

  3. The second flow starts with the upload.php input node - this is where the form in the first flow posts the image to get it classified (screenshot: upload flow entry). Note the option for accepting file uploads has been checked.

  4. The Change node set msg.payload loads the first/only uploaded image into the input buffer for the Watson Visual Recognition node (screenshot: set image buffer).

  5. The Visual Recognition node passes the image to the Watson service and forwards the results to the rest of the flow (screenshot: vr node config). Note: this is where you need to paste the Watson service API key you copied earlier.

  6. The output from Watson Visual Recognition is a JSON object; it can be formatted using the Mustache template node for display on the main page (screenshot: vr results). (I know, HTML tables are not cool, but they are easy!)
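As a sketch of step 4: when the HTTP In node has "Accept file uploads" checked, Node-RED attaches the uploaded files to msg.req.files (multer-style in-memory storage), with each entry's raw bytes in a buffer property. The simulated message below is an assumption about that shape, not the exact contents of the flow:

```javascript
// Simulated message, shaped the way an HTTP In node with file uploads
// enabled delivers it: each upload appears in msg.req.files with its
// raw bytes in `buffer` (multer in-memory storage conventions).
const msg = {
  req: {
    files: [
      // Hypothetical upload: the first few bytes of a JPEG.
      { fieldname: "myFile", originalname: "cat.jpg", buffer: Buffer.from([0xff, 0xd8, 0xff]) }
    ]
  }
};

// The "set msg.payload" step: hand the first/only uploaded image's
// bytes to the Visual Recognition node as the message payload.
msg.payload = msg.req.files[0].buffer;

console.log(Buffer.isBuffer(msg.payload)); // true
```

From here the Visual Recognition node only needs msg.payload to be the image bytes; everything else on the message is ignored by the classification call.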
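And as a sketch of step 6: the v3 classify response nests its labels under images[0].classifiers[0].classes, each entry carrying a class name and a score (that shape is an assumption based on the v3 API; check your own service's output). Plain JavaScript makes the table-building that the Mustache template performs explicit:

```javascript
// A hypothetical, abridged Visual Recognition classify response.
const result = {
  images: [{
    classifiers: [{
      classes: [
        { class: "tabby cat", score: 0.96 },
        { class: "animal",    score: 0.89 }
      ]
    }]
  }]
};

// Flatten the nested response into one HTML table row per label.
const rows = result.images[0].classifiers[0].classes
  .map(c => `<tr><td>${c.class}</td><td>${c.score}</td></tr>`)
  .join("");
const table = `<table><tr><th>Class</th><th>Score</th></tr>${rows}</table>`;

console.log(table.includes("tabby cat")); // true
```

The resulting HTML fragment is what the second flow hands back to the main page through the {{{results}}} placeholder.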