This paper addresses the problem of globally
balancing colors between images. The input to our algorithm is
a sparse set of desired color correspondences between a source
and a target image. The global color space transformation
problem is then solved by computing a smooth vector field
in CIE Lab color space that maps the gamut of the source to
that of the target. We employ normalized radial basis functions
for which we compute optimized shape parameters based on
the input images, allowing for more faithful and flexible color
matching compared to existing RBF-, regression-, or histogram-based
techniques. Furthermore, we show how the basic per-image
matching can be efficiently and robustly extended to
the temporal domain using RANSAC-based correspondence
classification. Besides interactive color balancing for images,
these properties render our method extremely useful for
automatic, consistent embedding of synthetic graphics in video,
as required by applications such as augmented reality.
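The core mapping described above can be illustrated with a minimal sketch: a smooth displacement field over CIE Lab space built from sparse color correspondences via normalized (partition-of-unity) Gaussian RBFs. Note that the function name, the fixed shape parameter `eps`, and the use of raw displacements as weights are illustrative assumptions; the paper optimizes the shape parameters per image pair, which is omitted here.

```python
import numpy as np

def normalized_rbf_map(x, centers, displacements, eps=0.1):
    """Map Lab colors x through a smooth displacement field defined by
    sparse correspondences (a hedged sketch, not the paper's exact method;
    the optimized per-image shape parameter is replaced by a fixed eps).

    x             : (N, 3) query colors in CIE Lab
    centers       : (M, 3) source-side correspondence colors
    displacements : (M, 3) target-minus-source color differences
    """
    # Pairwise squared distances between query colors and RBF centers.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    phi = np.exp(-eps * d2)                    # Gaussian basis responses
    w = phi / phi.sum(axis=1, keepdims=True)   # normalization: weights sum to 1
    return x + w @ displacements               # apply the smooth vector field
```

Because the weights form a partition of unity, the mapped colors stay within the convex hull of the specified displacements, which keeps the gamut transformation well behaved far from the correspondence points.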