Thursday, 30 July 2015

Qt 4 and 5 in same machine (Ubuntu)




"qtchooser -list-versions": list the available Qt versions.


Modify the following file
/usr/lib/x86_64-linux-gnu/qtchooser/5.conf
so that it specifies the qmake directory and the library directory of the Qt installation. Example for Qt 5:
/opt/Qt5.5.0/5.5/gcc_64/bin
/opt/Qt5.5.0/5.5/gcc_64



To choose a different Qt version: 
"export QT_SELECT=5"

To check which Qt version is currently in use:
"qmake -v" 

Alternatively, point the environment directly at a locally installed Qt (e.g. Qt 4.8.7):
"export PKG_CONFIG_LIBDIR=/opt/qt-4.8.7/lib:$PKG_CONFIG_LIBDIR"
"export PATH=/opt/qt-4.8.7/bin:$PATH"
"export PKG_CONFIG_PATH=/opt/qt-4.8.7/lib/pkgconfig/:$PKG_CONFIG_PATH"


Monday, 27 July 2015

Configure AJA device


0. Dependency:
g++, libasound2, libasound2-dev, and libncurses5-dev
For the Qt-based demo, qt4-dev-tools is required. Note that qt4-dev-tools doesn't include QtMultimedia, which is also required by the demo program.

In Ubuntu:
sudo apt-get install libasound2
sudo apt-get install autoconf


1. In a terminal, go to the ntv2projects directory in the SDK and run make.

2. Load the device driver: sudo SDK_Dir/bin/loadOEM2K

3. Verify the XENA2 module is loaded using 'lsmod'

------------------------------
Load .ko (driver) at boot (DID NOT WORK FOR THE AJA DEVICE):
1. copy the XENA2.ko to /lib/modules/3.16.0-41-generic/kernel/drivers/ntv2
2. edit /etc/modules and add a line "XENA2"
3. "sudo depmod -a"
4. reboot
Make sure the kernel version in the path matches the one currently in use.
------------------------------

------------------------------
Based on the discussion in this link: to avoid pthread linking errors caused by a CMake bug, the following line needs to be added to the CMakeLists.txt.
  1. set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -pthread")
------------------------------
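A minimal sketch of where that line fits in a CMakeLists.txt (the project, target, and file names here are hypothetical):

```cmake
cmake_minimum_required(VERSION 2.8)
project(ntv2_demo)  # hypothetical project name

# Work around the CMake pthread-detection bug by passing -pthread
# to the linker explicitly:
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -pthread")

add_executable(demo main.cpp)  # hypothetical target and source
```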

Thursday, 2 July 2015

OpenCV findCirclesGrid with color filter

1. Convert RGB image to HSV for better colour identification.
As a side note, in OpenCV H has values from 0 to 180, while S and V range from 0 to 255. Red, in OpenCV, has hue values approximately in the ranges 0 to 10 and 160 to 180.
In HSV space:
red primary at 0° (0 in OpenCV)
green primary at 120° (60 in OpenCV)
blue primary at 240° (120 in OpenCV)
yellow at 60° (30 in OpenCV)
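The degrees-to-OpenCV hue mapping can be checked with Python's standard-library colorsys module (opencv_hue is a helper I made up for illustration; colorsys returns hue in [0, 1), which OpenCV stores as degrees/2 in an 8-bit image):

```python
import colorsys

def opencv_hue(r, g, b):
    """Hue on OpenCV's 8-bit scale (0-180), from normalized RGB."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360) // 2  # degrees halved, as OpenCV stores H

print(opencv_hue(1, 0, 0))  # red    -> 0
print(opencv_hue(0, 1, 0))  # green  -> 60
print(opencv_hue(0, 0, 1))  # blue   -> 120
print(opencv_hue(1, 1, 0))  # yellow -> 30
```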


// Convert input image to HSV
cv::Mat hsv_image;
cv::cvtColor(bgr_image, hsv_image, cv::COLOR_BGR2HSV);

2. Use the cv::inRange function to threshold the target colour
// Threshold the HSV image, keep only the red pixels
cv::Mat lower_red_hue_range;
cv::Mat upper_red_hue_range;
cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);

3. Use the cv::addWeighted function to combine the two hue masks
cv::Mat red_hue_image;
cv::addWeighted(lower_red_hue_range, 1.0, upper_red_hue_range, 1.0, 0.0, red_hue_image);


Reference:
[1] Detect red circles in an image using OpenCV


Thursday, 4 June 2015

Argon Plasma Coagulation (APC) laparoscopy application


1. Use of Neutral Argon Plasma in the Laparoscopic Treatment of Endometriosis: link
Neutral argon plasma can be utilized as a multi-functional device that has vaporization, coagulation, and superficial cutting capacities with minimal thermal spread and acceptable outcomes. The use of neutral argon plasma appears to be efficacious and safe for the complete treatment of endometriotic implants.

Vaporization: the probe was directed at the lesions from a distance of about 5 mm, spraying the tissue surface at an angle of approximately 45 degrees until the surface showed a characteristic silver, shiny appearance and there was visual confirmation of complete eradication of the lesions.
Cutting: the probe is held closer to the tissue surface, at approximately a 90-degree angle.

Within the limitations of a pilot study, plasma energy appears to be efficacious and safe for the complete vaporization of endometriotic implants and can be used for the laparoscopic management of endometriosis.

2. Barrett's Oesophagus: Long-term Follow-up After Complete Ablation With Argon Plasma Coagulation: link
laparoscopic fundoplication

3. Argon plasma coagulation for successful treatment of early gastric cancer with intramucosal invasion: link





hematischesis:



Saturday, 30 May 2015

Different ways to estimate relative pose during visual servoing

In order to obtain the relative pose (pdHp) between the current and desired pattern poses (cHp and cHpd) in the camera frame, we can:

Method 1: Find the rigid transformation between the two sets of pattern points (X_p and X_pd) in the camera frame directly, using a findRigidTransform() function described here.

Method 2: Calculate cHp and cHpd using findRigidTransform(), since the pattern points are known in the local coordinate frame. The relative pose pdHp is then calculated as:
pdHp = inv(cHpd) * cHp

Method 3: Calculate cHp and cHpd using the cv::solvePnP() function, which finds an object pose from 3D-2D point correspondences. The relative pose is then calculated in the same way as in Method 2.

According to the results, Method 1 is the least stable and accurate: its estimated relative pose varies considerably and differs markedly from the other two methods.
Methods 2 and 3 give similar results, with Method 3 appearing better since its rotation part is closer to identity in cases where the relative pose should be the identity.

Summary: Method 3 is the best for the time being.
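The composition used in Methods 2 and 3 can be sketched in plain Python with 4x4 homogeneous transforms (the helper functions and the translation-only example poses below are made up for illustration):

```python
def mat_mul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def se3_inv(H):
    """Analytic inverse of a rigid transform: [R' | -R' t]."""
    Rt = [[H[j][i] for j in range(3)] for i in range(3)]          # R transposed
    t = [-sum(Rt[i][j] * H[j][3] for j in range(3)) for i in range(3)]
    return [Rt[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

# Desired and current pattern poses in the camera frame (made-up values):
cHpd = [[1.0, 0.0, 0.0, 1.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]]
cHp = [[1.0, 0.0, 0.0, 1.0],
       [0.0, 1.0, 0.0, 1.0],
       [0.0, 0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]

# pdHp = inv(cHpd) * cHp
pdHp = mat_mul(se3_inv(cHpd), cHp)
print(pdHp[0][3], pdHp[1][3])  # 0.0 1.0: the poses differ by a unit y-shift
```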

Thursday, 28 May 2015

Relative trajectory playback from different starting pose

1. Given a trajectory in the global frame, a relative trajectory (H_rela) can be generated as:
H_rela(i) = inv(H_1) * H_i
Since the relative transformation is with respect to the local frame (the first pose in the trajectory), H_rela(1) is the identity.

2. For a new starting pose H_start in the global frame, a new trajectory beginning at that pose is generated as:

H_new(i) = H_start * H_rela(i)
where H_new is in the global frame.
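The two steps above can be sketched in plain Python (translation-only poses with made-up values; the helpers are mine):

```python
def mat_mul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def se3_inv(H):
    """Analytic inverse of a rigid transform: [R' | -R' t]."""
    Rt = [[H[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(Rt[i][j] * H[j][3] for j in range(3)) for i in range(3)]
    return [Rt[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def trans(x, y, z):
    """Pure-translation homogeneous transform."""
    return [[1.0, 0, 0, x], [0, 1.0, 0, y], [0, 0, 1.0, z], [0, 0, 0, 1.0]]

traj = [trans(1, 0, 0), trans(2, 0, 0), trans(3, 1, 0)]   # recorded trajectory
H_rela = [mat_mul(se3_inv(traj[0]), H) for H in traj]     # step 1; H_rela[0] = I
H_start = trans(5, 5, 0)                                  # new starting pose
new_traj = [mat_mul(H_start, H) for H in H_rela]          # step 2

print(new_traj[0] == H_start)                  # True: playback begins at H_start
print(new_traj[2][0][3], new_traj[2][1][3])    # 7.0 6.0
```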

Tuesday, 5 May 2015

Git - update submodule

Update and merge/rebase local submodule with upstream:

git submodule update --remote --merge
git submodule update --remote --rebase

Under main project directory, see difference of submodule:
git diff --submodule


3 cases:
  • Same commit but submodule has dirty suffix
-Subproject commit c5c6bbaf616d64fbd873df7b7feecebb81b5aee7
+Subproject commit c5c6bbaf616d64fbd873df7b7feecebb81b5aee7-dirty

It means that git status in the submodule isn't clean. cd into the submodule and see what's going on.

  • Different commit, former one is correct (the one in main project)
-Subproject commit c4478af032e604bed605e82d04a248d75fa513f7
+Subproject commit c79d9c83c2864665ca3fd0b11e20a53716d0cbb0
If the former is right, add and commit the new version of the submodule in the main project
  • Different commit, latter one is correct (the one in submodule)
If the latter is right, change the version that the submodule is at with:
git submodule update $submodule_name$






Reference:
[1] http://git-scm.com/book/en/v2/Git-Tools-Submodules
[2] http://stackoverflow.com/questions/6006494/git-submodule-modified-files-status

Monday, 23 March 2015

Angular velocity and Skew Symmetric Matrices

A matrix S is said to be skew symmetric if and only if: S^T + S = 0.
Thus S contains only three independent entries, and every 3 × 3 skew symmetric matrix has the form:

S = [   0   -s_3   s_2
       s_3    0   -s_1
      -s_2   s_1    0  ]

If a = (a_x, a_y, a_z)' is a 3-vector, the skew symmetric matrix S(a) can be defined as:

S(a) = [   0   -a_z   a_y
          a_z    0   -a_x
         -a_y   a_x    0  ]

Important properties of the matrix S(a):
  • Linearity: for any vectors a and b in R^3 and scalars alpha and beta:
    • S(alpha*a + beta*b) = alpha*S(a) + beta*S(b)
  • Calculation of cross product: for any vector p = (p_x, p_y, p_z)':
    • S(a)*p = a.cross(p)
  • For an orthogonal matrix (such as a rotation) R in SO(3) and vectors a, b in R^3:
    • R(a.cross(b)) = (R*a).cross(R*b)
  • For R in SO(3) and a vector a in R^3, we have:
    • R * S(a) * R' = S(R*a)
  • Computing the derivative of the rotation matrix R is equivalent to a matrix multiplication by a skew symmetric matrix S(ω), where ω is the angular velocity; that is:
    • dR/dt = S(ω) * R

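These identities are easy to sanity-check; a plain-Python verification of S(a)p = a × p (variable names are mine):

```python
def skew(a):
    """Skew symmetric matrix S(a) of a 3-vector a."""
    ax, ay, az = a
    return [[0, -az, ay],
            [az, 0, -ax],
            [-ay, ax, 0]]

def cross(a, b):
    """Cross product a x b."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, p = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
Sp = [sum(skew(a)[i][j] * p[j] for j in range(3)) for i in range(3)]
print(Sp == cross(a, p))  # True: S(a) p equals a x p
```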
The velocity of a point p on a rigid body rotating with angular velocity ω while translating with linear velocity v_o follows from the same property: p_dot = v_o + S(ω)(p - o) = v_o + ω × r, where r is the position of p relative to the body origin o.

Monday, 16 March 2015

Paper note: Robust Jacobian Estimation for Uncalibrated Visual Servoing

This paper proposes a robust Jacobian estimation method for uncalibrated visual servoing. By "uncalibrated", the authors mean a model-free and nonparametric Jacobian (i.e. one with no analytic form).
To estimate the Jacobian, a Broyden rank-one secant update had been proposed previously, but it requires a good initial guess. Farahmand et al. propose local least-squares (LLS) estimation to utilize the memory of visual-motor data. This method estimates the Jacobian at any point in the workspace directly from raw visual-motor data in a close neighborhood (K-NN) of the point under consideration.
The following equation considers the Jacobian estimation as a minimization problem, roughly of the form:

J_hat = argmin_J  sum_k  rho( || Δs_k − J Δq_k || )

where (Δq_k, Δs_k) are the visual-motor pairs in the neighborhood and rho is the estimator's loss function.

It is pointed out that the least-squares (LS) estimator is not robust to outliers, as its weight function assigns equal weight to all data, including outliers. The L1 norm is more robust, but both have the least possible breakdown point (BDP). The BDP refers to the smallest proportion of incorrect samples that the estimator can tolerate before they arbitrarily affect the model fitting; in other words, the BDP of an estimator is a measure of its resistance to outliers. The maximum BDP is 50%, where outliers and inliers are present in equal amounts. In the paper, two other M-estimators with a redescending influence function, Tukey's Biweight (BW) function and the Geman-McClure (GM) estimator, are investigated. The ρ function for the GM estimator is of the form:

ρ(x) = x² / (1 + x²)    (standard GM form)

For other types of M-estimators, refer to this link. Note that the formulation of the GM estimator in the paper is slightly different from the link; more investigation is needed.



The scale parameter σ quantifies how spread out the probability distribution is; for example, variance is a measure of scale for the normal distribution. To estimate the scale for an M-estimator, the paper uses the Median Absolute Deviation (MAD), which has the highest possible BDP of 50% and a bounded influence function, and is computationally efficient despite its low Gaussian efficiency (37%).
More info about MAD and other way to measure scale of data see: link

A common method to solve (6) is Iteratively Reweighted Least Squares (IRLS), which is widely used as an efficient implementation of robust M-estimators in nonlinear optimization. The IRLS procedure used in the paper for the Jacobian estimation (JACOBIANESTIRLS) is summarized below:


The algorithm presented in the paper can be summarized as follows:

A.1 Initialize visual-motor memory
A.2 Determine neighbors: K-NN
A.3 Estimate initial scale: Use MAD to find initial measure of scale σ
A.4 Find initial weights: Initialize weight matrix W0 according to the found norm and scale.
A.5 Estimate the Jacobian: Use JACOBIANESTIRLS
A.6 Update control signal
A.7 Update memory: The new visual-motor pair is added to the memory for later use P = P +1 (All pairs are kept?)
A.8 Goto step A.2
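Steps A.3-A.5 can be sketched on a toy 1-D problem (a single-input, single-output "Jacobian"; the GM weight and MAD scale formulas below are standard textbook choices, not necessarily the paper's exact ones, and the data are made up):

```python
def gm_weight(r, sigma):
    # One common Geman-McClure weight: w(r) = 1 / (1 + (r/sigma)^2)^2
    return 1.0 / (1.0 + (r / sigma) ** 2) ** 2

def mad_scale(res):
    # sigma ~= 1.4826 * median(|r - median(r)|)  (MAD, consistent for Gaussians)
    s = sorted(res)
    med = s[len(s) // 2]
    dev = sorted(abs(r - med) for r in res)
    return 1.4826 * dev[len(dev) // 2]

# Visual-motor samples: ds ~= J * dq with true J = 2; the last sample is an outlier.
dq = [1.0, 2.0, 3.0, 4.0, 5.0]
ds = [2.0, 4.1, 5.9, 8.0, 30.0]

# Plain LS initialization (pulled off by the outlier), then IRLS refinement:
J = sum(s * q for s, q in zip(ds, dq)) / sum(q * q for q in dq)
for _ in range(10):
    res = [s - J * q for s, q in zip(ds, dq)]          # residuals
    sigma = max(mad_scale(res), 1e-6)                  # A.3: robust scale
    w = [gm_weight(r, sigma) for r in res]             # A.4: weights
    J = (sum(wi * s * q for wi, s, q in zip(w, ds, dq))
         / sum(wi * q * q for wi, q in zip(w, dq)))    # A.5: weighted LS

print(round(J, 1))  # 2.0: the outlier is effectively down-weighted to zero
```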










Reference:
[1] A. Shademan, A. M. Farahmand, and M. Jägersand, “Robust Jacobian estimation for uncalibrated visual servoing,” Proc. - IEEE Int. Conf. Robot. Autom., pp. 5564–5569, 2010.