RobotEyes


Visual Regression Library for Robot Framework

Uses ImageMagick to compare images and create a diff image. Provides a custom report for viewing baseline, actual and diff images, as well as passed and failed tests. Supports blurring regions (Selenium only) within a page to exclude them from comparison (helpful when a page contains dynamic elements such as text). Supports SeleniumLibrary (tested), Selenium2Library (tested) and AppiumLibrary (not tested).

Requirements

Important (ImageMagick 7): During installation, make sure to check the Install Legacy Utilities (e.g. convert, compare) checkbox, and that the ImageMagick directory is in your PATH environment variable. In particular, ensure that compare.exe is on your PATH. If you still don't see diff images being generated, downgrade to ImageMagick 6.
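As a quick sanity check (a hypothetical helper, not part of RobotEyes itself), you can verify from Python that the legacy utilities are reachable on your PATH:

```python
import shutil

def imagemagick_legacy_tools_available():
    # RobotEyes shells out to the legacy ImageMagick utilities; on Windows these
    # are compare.exe and convert.exe, installed by the Install Legacy Utilities option.
    return all(shutil.which(tool) is not None for tool in ("compare", "convert"))

print("ImageMagick legacy tools on PATH:", imagemagick_legacy_tools_available())
```

If this prints False, revisit the installation options or your PATH before running the tests.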

Quick-reference Usage Guide

Keyword Documentation

Keyword | Arguments | Comments
--- | --- | ---
Open Eyes | lib, tolerance | Ex: Open Eyes lib=AppiumLibrary tolerance=5
Capture Full Screen | tolerance, blur, radius | Ex: Capture Full Screen tolerance=5 blur=<list of locators> radius=50 (thickness of blur)
Capture Element | locator, tolerance, blur, radius |
Capture Mobile Element | locator, tolerance, blur, radius |
Scroll To Element | locator | Ex: Scroll To Element id=user
Compare Images | | Compares all the images captured in the test with their respective baseline images

Running Tests

robot -d results -v images_dir:<baseline_images_directory> tests
If the baseline image directory does not exist, RobotEyes will create it. If a baseline image does not exist, RobotEyes will move the captured image into the baseline directory. For example, on the first test run, all captured images are moved to the baseline directory you passed (images_dir).
Important: Passing the baseline image directory is mandatory; omitting it will raise an exception.

Directory structure

The RobotEyes library creates a visual_images directory which will contain two additional directories, named actual & diff, respectively.
These directories are necessary for the library to function and are created by it at different stages of the test case (TC) development workflow.
The resulting directory structure created in the project looks as follows:

  • visual_images/
    • actual/
      • name_of_tc1/
        • img1.png
        • img1.png.txt
      • name_of_tc2/
        • img1.png
        • img1.png.txt
      • name_of_tc3/
        • img1.png
        • img1.png.txt
    • diff/
      • name_of_tc1/
        • img1.png
      • name_of_tc2/
        • img1.png
      • name_of_tc3/
        • img1.png

Generating the baseline images

Baseline images will be generated when tests are run the first time. Subsequent test runs will trigger comparison of actual and baseline images.

For example:

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary    5    # use AppiumLibrary for mobile tests
    Wait Until Element Is Visible    id=lst-ib
    Capture Full Screen
    Compare Images
    Close Browser

Comparing the images

To compare the images, the following needs to exist in the TC's code:

For example:

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary  5
    Wait Until Element Is Visible    id=lst-ib
    Capture Full Screen
    Compare Images
    Close Browser

After the comparison is completed (i.e. the Compare Images keyword in the TC is executed), a difference image is generated and stored in the diff directory.
A text file is also created, containing the result of comparing the RMSE (root mean squared error) of the diff image against the tolerance set by the user.
If the comparison fails, the test is marked as failed in the regular Robot Framework report.
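RobotEyes delegates the actual pixel comparison to ImageMagick's compare utility, but the pass/fail logic above can be sketched as follows (function names here are illustrative, not RobotEyes internals): the RMSE of the pixel differences is normalized and checked against the user-supplied tolerance.

```python
def rmse(pixels_a, pixels_b):
    # Root mean squared error between two equal-length sequences of 0-255 pixel values.
    diffs = [(a - b) ** 2 for a, b in zip(pixels_a, pixels_b)]
    return (sum(diffs) / len(diffs)) ** 0.5

def within_tolerance(pixels_a, pixels_b, tolerance):
    # Normalize the RMSE to a 0-100 scale; the comparison passes only if the
    # dissimilarity does not exceed the tolerance.
    return rmse(pixels_a, pixels_b) / 255 * 100 <= tolerance

print(within_tolerance([10, 20, 30], [10, 20, 30], tolerance=5))  # True: identical images
print(within_tolerance([0, 0, 0], [255, 0, 0], tolerance=5))      # False: one fully changed pixel
```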

Another test example

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes

*** Variables ***
@{blur}    id=body    css=#SIvCob

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary  5
    Wait Until Element Is Visible    id=lst-ib
    # Below, the optional arguments are: the tolerance (overriding the global value), the regions
    # to blur in the image, and the thickness of the blur (radius of the Gaussian blur applied to the regions)
    Capture Full Screen    10    ${blur}    50
    Capture Element    id=hplogo
    Compare Images
    Close Browser

Tolerance

Tolerance is the allowed dissimilarity between images. If the comparison difference exceeds the tolerance, the test fails.
You can set the tolerance globally by passing it to the Open Eyes keyword. Ex: Open Eyes lib=SeleniumLibrary tolerance=5.
Additionally, you can override the global tolerance by passing it to the Capture Element and Capture Full Screen keywords.
Ex: Capture Element <locator> tolerance=10 blur=${locators}
Tolerance should range between 1 and 100.

Blurring elements from image

You can also blur out unwanted elements (dynamic text etc.) in an image to exclude them from comparison. This can help produce more accurate test results. You can pass a list of locators, or a single locator, as an argument to the Capture Element and Capture Full Screen keywords.
Ex: Capture Element <locator> blur=id=test

    @{blur}    id=body    css=#SIvCob
    Capture Element   <locator>  blur=${blur}
    Capture Full Screen     blur=${blur}

Basic Report


A basic report is generated automatically after execution (not supported for pabot). Alternatively, you can generate the report by running the following command.

    reportgen --baseline=<baseline image folder> --results=<results folder>

Important: If you want to remotely view the report on Jenkins, you might need to update the CSP setting, Refer: https://wiki.jenkins.io/display/JENKINS/Configuring+Content+Security+Policy#ConfiguringContentSecurityPolicy-HTMLPublisherPlugin

Interactive Report

RobotEyes generates a report automatically after all tests have been executed. However, a more interactive and intuitive Flask-based report is also available.

You can view passed and failed tests, and also use this feature to move acceptable actual images to the baseline directory. Run the eyes server like this: eyes --baseline=<baseline image directory> --results=<output directory> (leave empty if the output is at the project root).


You can move selected images within a test case by selecting the images and clicking the "Baseline Images" button.
You can also move all images of one or more test cases by selecting the test cases you want to baseline and clicking the "Baseline Images" button.

Note: You need the gevent library installed on the machine to be able to use the eyes server.

Pabot users

Visual tests can be executed in parallel using pabot to speed up execution. In this mode, generate the report after running the tests with reportgen --baseline=<baseline images folder> --results=<results folder>.
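A minimal sketch of such a run (directory names here are placeholders):

```shell
# Run the visual tests in parallel; the basic report is not auto-generated
# under pabot, so build it explicitly afterwards.
pabot --outputdir results -v images_dir:baseline_images tests
reportgen --baseline=baseline_images --results=results
```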

Contributors:

Adirala Shiva: contributed a robotmetrics-inspired report for RobotEyes.
DiegoSanchezE: added major improvements to the ReadMe.
Priya: contributes by testing and finding bugs/improvements before every release.
Ciaran Doheny: actively tests and suggests improvements.

Note

If you find this library useful, please star the repository.
For any issue, feature request or clarification, feel free to raise an issue on GitHub or email me at iamjess988@gmail.com.