Open source C++ library
What is RAVL?
RAVL - Recognition And Vision Library - provides a base C++ class library together with a range of computer vision, pattern recognition, audio and supporting tools.
RAVL was originally developed at CVSSP, the Centre for Vision, Speech and Signal Processing, at the University of Surrey, UK. It was subsequently released as open source to support its use in a wider community.
Some of the features that set RAVL apart from other C++ libraries are:
- SMP/thread-safe reference counting, allowing easy construction of large programs that take full advantage of multiprocessor servers
- Powerful I/O mechanism that handles file-format and type-conversion issues transparently, separately from the main application code
- Java-like class interfaces which largely avoid the direct use of pointers, allowing code to be written in a clear, readable style
- Easy-to-use and powerful make system suitable for building both large and small projects
RAVL is written in ANSI C++ and is intended to work on a wide range of platforms and compilers. Currently it is actively maintained under:
| OS      | Architecture | Compiler            |
|---------|--------------|---------------------|
| Linux   | i386         | GNU gcc v. 4.4.3    |
| Windows | i386         | Visual Studio 2005  |
In the past it was also maintained under these platforms:
| OS      | Architecture | Compiler            |
|---------|--------------|---------------------|
| Solaris | Sparc        | GNU gcc v. 3.3      |
RAVL is provided under the GNU Lesser General Public License (LGPL).
RAVL is used by a (small) number of organisations, including:
- Centre for Vision, Speech, and Signal Processing (University of Surrey, UK)
- Digital Barriers / Omniperception Ltd.
- Advanced Technology Laboratories, Lockheed Martin
If you're using RAVL, please tell us and we'll add you to the list!
Frequently asked questions
Please see the RAVL FAQs on our external site for more information.
RAVL was originally derived from AMMA, written by Radek Marik with help from many other members of CVSSP. The work of porting AMMA to RAVL was largely undertaken by Charles Galambos, again with help from other members of CVSSP. The RavlMath library includes ccmath, written by Daniel A. Atkinson. RAVL is currently maintained by members of CVSSP and Omniperception.