Professor Raouf Hamzaoui

Job: Professor in Media Technology

Faculty: Computing, Engineering and Media

School/department: School of Engineering and Sustainable Development

Research group(s): Institute of Engineering Sciences

Address: De Montfort University, The Gateway, Leicester, LE1 9BH

T: +44 (0)116 207 8096

E: rhamzaoui@dmu.ac.uk

W: http://www.tech.dmu.ac.uk/~hamzaoui/

 

Personal profile

Raouf Hamzaoui received the MSc degree in mathematics from the University of Montreal, Canada, in 1993, the Dr. rer. nat. degree from the University of Freiburg, Germany, in 1997, and the Habilitation degree in computer science from the University of Konstanz, Germany, in 2004. He was an Assistant Professor with the Department of Computer Science of the University of Leipzig, Germany, and with the Department of Computer and Information Science of the University of Konstanz. In September 2006, he joined DMU, where he is a Professor in Media Technology and Head of the Signal Processing and Communications Systems Group in the Institute of Engineering Sciences. Raouf Hamzaoui is an IEEE Senior Member. He was a member of the Editorial Boards of the IEEE Transactions on Multimedia and the IEEE Transactions on Circuits and Systems for Video Technology. He has published more than 100 research papers in books, journals, and conferences. His research has been funded by the EU, DFG, the Royal Society, and industry, and has received best paper awards (ICME 2002, PV’07, CONTENT 2010, MESM’2012, UIC-2019).

Research group affiliations

Institute of Engineering Sciences (IES)

 

Publications and outputs

  • PU-Refiner: A Geometry Refiner with Adversarial Learning for Point Cloud Upsampling
    Liu, Hao; Yuan, Hui; Hamzaoui, Raouf; Gao, Wei; Li, Shuai. We present PU-Refiner, a generative adversarial network for point cloud upsampling. The generator of our network includes a coarse feature expansion module to create coarse upsampled features, a geometry generation module to regress a coarse point cloud from the coarse upsampled features, and a progressive geometry refinement module to restore the dense point cloud in a coarse-to-fine fashion based on the coarse upsampled point cloud. The discriminator of our network helps the generator produce point clouds closer to the target distribution. It makes full use of multi-level features to improve its classification performance. Extensive experimental results show that PU-Refiner is superior to five state-of-the-art point cloud upsampling methods. Code: https://github.com/liuhaoyun/PU-Refiner Liu, H., Yuan, H., Hamzaoui, R., Gao, W., Li, S. (2022) PU-Refiner: A Geometry Refiner with Adversarial Learning for Point Cloud Upsampling. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), Singapore and Shenzhen, May 2022.
  • No-reference Bitstream-layer Model for Perceptual Quality Assessment of V-PCC Encoded Point Clouds
    Liu, Qi; Su, Honglei; Chen, Tianxin; Yuan, Hui; Hamzaoui, Raouf. No-reference bitstream-layer models for point cloud quality assessment (PCQA) use the information extracted from a bitstream for real-time and nonintrusive quality monitoring. We propose a no-reference bitstream-layer model for the perceptual quality assessment of video-based point cloud compression (V-PCC) encoded point clouds. First, we describe the fundamental relationship between perceptual coding distortion and the texture quantization parameter (TQP) when geometry encoding is lossless. Then, we incorporate the texture complexity (TC) into the proposed model while considering the fact that the perceptual coding distortion of a point cloud depends on the texture characteristics. TC is estimated using TQP and the texture bitrate per pixel (TBPP), both of which are extracted from the compressed bitstream without resorting to complete decoding. Then, we construct a texture distortion assessment model upon TQP and TBPP. By combining this texture distortion model with the geometry quantization parameter (GQP), we obtain an overall no-reference bitstream-layer PCQA model that we call bitstreamPCQ. Experimental results show that the proposed model markedly outperforms existing models in terms of widely used performance criteria, including the Pearson linear correlation coefficient (PLCC), the Spearman rank order correlation coefficient (SRCC) and the root mean square error (RMSE). The dataset developed in this study is publicly available at https://github.com/qdushl/Waterloo-Point-Cloud-Database-3.0. Liu, Q., Su, H., Chen, T., Yuan, H. and Hamzaoui, R. (2022) No-reference Bitstream-layer Model for Perceptual Quality Assessment of V-PCC Encoded Point Clouds. IEEE Transactions on Multimedia.
  • Large-scale crowdsourced subjective assessment of picturewise just noticeable difference
    Lin, Hanhe; Chen, Guangan; Jenadeleh, Mohsen; Hosu, Vlad; Reips, Ulf-Dietrich; Hamzaoui, Raouf; Saupe, Dietmar. The picturewise just noticeable difference (PJND) for a given image, compression scheme, and subject is the smallest distortion level that the subject can perceive when the image is compressed with this compression scheme. The PJND can be used to determine the compression level at which a given proportion of the population does not notice any distortion in the compressed image. To obtain accurate and diverse results, the PJND must be determined for a large number of subjects and images. This is particularly important when experimental PJND data are used to train deep learning models that can predict a probability distribution model of the PJND for a new image. To date, such subjective studies have been carried out in laboratory environments. However, the number of participants and images in all existing PJND studies is very small because of the challenges involved in setting up laboratory experiments. To address this limitation, we develop a framework to conduct PJND assessments via crowdsourcing. We use a new technique based on slider adjustment and a flicker test to determine the PJND. A pilot study demonstrated that our technique could decrease the study duration by 50% and double the perceptual sensitivity compared to the standard binary search approach that successively compares a test image side by side with its reference image. Our framework includes a robust and systematic scheme to ensure the reliability of the crowdsourced results. Using 1,008 source images and distorted versions obtained with JPEG and BPG compression, we apply our crowdsourcing framework to build the largest PJND dataset, KonJND-1k (Konstanz just noticeable difference 1k dataset). A total of 503 workers participated in the study, yielding 61,030 PJND samples that resulted in an average of 42 samples per source image. The KonJND-1k dataset is available at http://database.mmsp-kn.de/konjnd-1kdatabase.html Funding: TRR 161 (Project A05). H. Lin, G. Chen, M. Jenadeleh, V. Hosu, U. Reips, R. Hamzaoui and D. Saupe (2022) Large-scale crowdsourced subjective assessment of picturewise just noticeable difference. IEEE Transactions on Circuits and Systems for Video Technology.
  • Kalman filter-based prediction refinement and quality enhancement for geometry-based point cloud compression
    Wang, Lu; Sun, Jian; Yuan, Hui; Hamzaoui, Raouf; Wang, Xiaohui. A point cloud is a set of points representing a three-dimensional (3D) object or scene. To compress a point cloud, the Motion Picture Experts Group (MPEG) geometry-based point cloud compression (G-PCC) scheme may use three attribute coding methods: region adaptive hierarchical transform (RAHT), predicting transform (PT), and lifting transform (LT). To improve the coding efficiency of PT, we propose to use a Kalman filter to refine the predicted attribute values. We also apply a Kalman filter to improve the quality of the reconstructed attribute values at the decoder side. Experimental results show that the combination of the two proposed methods can achieve an average Bjøntegaard delta bitrate of -0.48%, -5.18%, and -6.27% for the Luma, Chroma Cb, and Chroma Cr components, respectively, compared with a recent G-PCC reference software. L. Wang, J. Sun, H. Yuan, R. Hamzaoui, X. Wang, Kalman filter-based prediction refinement and quality enhancement for geometry-based point cloud compression, to appear in: Proc. Visual Communications and Image Processing (VCIP 2021), Munich, Dec. 2021.
  • Optimized Dynamic Point Cloud Compression OPT-PCC: Report on experimental results
    Yuan, Hui; Hamzaoui, Raouf; Neri, Ferrante; Yang, Shengxiang. Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds are receiving increased attention from academia and industry due to their potential for many important applications, such as real-time 3D immersive telepresence, automotive and robotic navigation, as well as medical imaging. Compared to traditional video technology, point cloud systems allow free viewpoint rendering, as well as mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements as point clouds are typically represented by the geometry and colour (texture) of millions up to billions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes. However, the task is very challenging, especially for dynamic point clouds (sequences of point clouds), due to the irregular structure of point clouds (the number of 3D points may change from frame to frame, and the points within each frame are not uniformly distributed in 3D space). To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC). The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of V-PCC. The objectives of the project are to: O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bit rate and distortion; O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information; O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bit rate by at least 20% for the same reconstruction quality; O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills. This deliverable reports on the work undertaken in this project to achieve objective O3. The bitrates and distortions were computed for the quantization steps obtained as solutions of the optimization problem for a given target bitrate. Section 1 evaluates the rate-distortion performance of the optimization algorithms developed to achieve objective O2 when the dynamic point cloud consists of one group of frames. Section 2 considers the case when the dynamic point cloud consists of two groups of frames. Each time, two algorithms are evaluated: one where the optimization is carried out with differential evolution (DE) for analytical models of the rate and distortion functions (the model-based DE solution) and one where the optimization is carried out with DE for the actual rate and distortion functions (the encoding-based DE solution). To assess the performance of a solution, we compute the Bjøntegaard delta (BD) rate and BD distortion with respect to the state-of-the-art method. For the colour distortion, we considered only the luminance component. Moreover, we evaluate the bit allocation accuracy by calculating the bitrate error BE = |R_a − R_T| / R_a × 100%, where R_a and R_T are the actual bitrate computed by the method and the target bitrate, respectively. Results are reported for six dynamic point clouds (longdress, redandblack, loot, soldier, queen, basketballplayer) and for V-PCC Test Model TMC2 v12.0, which relies on the High Efficiency Video Coding Test Model Version 16. The computer codes used to generate the results are available at http://doi.org/10.5281/zenodo.5034575 and https://doi.org/10.5281/zenodo.5211174 for the one group of frames case and at https://doi.org/10.5281/zenodo.5552760 for the two groups of frames case. Yuan, H., Hamzaoui, R., Neri, F. and Yang, S. (2021) Optimized Dynamic Point Cloud Compression (OPT-PCC): Report on experimental results. Deliverable D4 of the Optimized Dynamic Point Cloud Compression (OPT-PCC) project.
  • Proposal to the MPEG 3DG Standardization Committee
    Yuan, Hui; Hamzaoui, Raouf; Neri, Ferrante; Yang, Shengxiang. The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of video-based point cloud compression (V-PCC). This deliverable is a proposal to the MPEG 3D Graphics Coding standardization committee, which was submitted on 27 June 2021 and presented to the committee on 13 July 2021 at the 4th WG7 Meeting. The proposal presents results from work undertaken as part of objectives O1 (analytical rate and distortion models), O2 (fast bit allocation algorithms), and O3 (an optimised compression scheme for dynamic point clouds). Yuan, H., Hamzaoui, R., Neri, F. and Yang, S. (2021) Proposal to the MPEG 3DG Standardization Committee. Deliverable D5.2 of the Optimized Dynamic Point Cloud Compression (OPT-PCC) project. November 2021.
  • Global Rate-distortion Optimization of Video-based Point Cloud Compression with Differential Evolution
    Yuan, Hui; Hamzaoui, Raouf; Neri, Ferrante; Yang, Shengxiang; Wang, Tingting. In video-based point cloud compression (V-PCC), one geometry video and one color video are generated from a dynamic point cloud. Then, the two videos are compressed independently using a state-of-the-art video coder. In the Moving Picture Experts Group (MPEG) V-PCC test model, the quantization parameters for a given group of frames are constrained according to a fixed offset rule. For example, for the low-delay configuration, the difference between the quantization parameters of the first frame and the quantization parameters of the following frames in the same group is zero by default. We show that the rate-distortion performance of the V-PCC test model can be improved by lifting this constraint and considering the rate-distortion optimization problem as a multi-variable constrained combinatorial optimization problem where the variables are the quantization parameters of all frames. To solve the optimization problem, we use a variant of the differential evolution algorithm. Experimental results for the low-delay configuration show that our method can achieve a Bjøntegaard delta bitrate of up to -43.04% and more accurate rate control (average bitrate error to the target bitrate of 0.45% vs. 10.75%) compared to the state-of-the-art method, which optimizes the rate-distortion performance subject to the test model default offset rule. We also show that our optimization strategy can be used to improve the rate-distortion performance of two-dimensional video coders. Funding: Marie Skłodowska-Curie Action, project Optimized Dynamic Point Cloud Compression (OPT-PCC), grant number 836192. Yuan, H., Hamzaoui, R., Neri, F., Yang, S. and Wang, T. (2021) Global rate-distortion optimization of video-based point cloud compression with differential evolution. In: Proc. 23rd International Workshop on Multimedia Signal Processing (IEEE MMSP 2021), Tampere, Oct. 2021.
  • Coder Source Code
    Yuan, Hui; Hamzaoui, Raouf; Neri, Ferrante; Yang, Shengxiang. The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of video-based point cloud compression (V-PCC). As part of objective O3 (a compression scheme for dynamic point clouds that outperforms the state-of-the-art in terms of rate-distortion performance), this deliverable gives the source code of the algorithms used in the project to optimize the rate-distortion performance of V-PCC. Yuan, H., Hamzaoui, R., Neri, F. and Yang, S. (2021) Coder source code. Deliverable D1 of the Optimized Dynamic Point Cloud Compression (OPT-PCC) project. October 2021.
  • Report on the Bit Allocation Solution
    Yuan, Hui; Hamzaoui, Raouf; Neri, Ferrante; Yang, Shengxiang. The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of video-based point cloud compression (V-PCC). This deliverable reports on the work undertaken in this project to achieve objective O2: fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information. Section 1 introduces the rate-distortion optimization problem for V-PCC. Section 2 reviews previous work. Section 3 presents our fast search algorithms. Section 4 gives experimental results. Section 5 gives our conclusions. Yuan, H., Hamzaoui, R., Neri, F. and Yang, S. (2021) Report on the bit allocation solution. Deliverable D3 of the Optimized Dynamic Point Cloud Compression (OPT-PCC) project. August 2021.
  • Adaptive Quantization for Predicting Transform-based Point Cloud Compression
    Wang, Xiaohui; Sun, Guoxia; Yuan, Hui; Hamzaoui, Raouf; Wang, Lu. The representation of three-dimensional objects with point clouds is attracting increasing interest from researchers and practitioners. Since this representation requires a huge data volume, effective point cloud compression techniques are required. One of the most powerful solutions is the Moving Picture Experts Group geometry-based point cloud compression (G-PCC) emerging standard. In the G-PCC lifting transform coding technique, an adaptive quantization method is used to improve the coding efficiency. Instead of assigning the same quantization step size to all points, the quantization step size is increased according to the level of detail traversal order. In this way, the attributes of more important points receive a finer quantization and have a smaller quantization error than the attributes of less important ones. In this paper, we adapt this approach to the G-PCC predicting transform and propose a hardware-friendly weighting method for the adaptive quantization. Experimental results show that compared to the current G-PCC test model, the proposed method can achieve an average Bjøntegaard delta rate of -6.7%, -14.7%, -15.4%, and -10.0% for the luma, chroma Cb, chroma Cr, and reflectance components, respectively, on the MPEG Cat1-A, Cat1-B, Cat3-fused and Cat3-frame datasets. X. Wang, G. Sun, H. Yuan, R. Hamzaoui and L. Wang (2021) Adaptive quantization for predicting transform-based point cloud compression. In: Proc. 11th International Conference on Image and Graphics (ICIG 2021), Haikou, China, August 2021.
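Two evaluation metrics recur throughout the publications above: the bitrate error used in the OPT-PCC experimental report, BE = |R_a − R_T| / R_a × 100%, and the Bjøntegaard delta (BD) rate. The sketch below implements the bitrate error as stated in the report, together with a deliberately simplified BD-rate approximation; note that the standard BD metric fits cubic polynomials to the rate-distortion curves in the log-rate domain, whereas this sketch uses piecewise-linear interpolation, so it is illustrative only, and all function names and sample values are hypothetical.

```python
import math

def bitrate_error(r_actual, r_target):
    """Bitrate error BE = |R_a - R_T| / R_a * 100%, where R_a is the
    actual bitrate produced by the coder and R_T the target bitrate."""
    return abs(r_actual - r_target) / r_actual * 100.0

def bd_rate_linear(rd_ref, rd_test):
    """Rough BD-rate sketch: average log-rate difference between two
    rate-distortion curves over their common quality range, using
    piecewise-linear interpolation (the standard metric fits cubics).
    Each curve is a list of (bitrate, quality) points, e.g. PSNR in dB."""
    def interp_log_rate(points, q):
        # Sort by quality and interpolate log(rate) at quality q.
        pts = sorted((p, math.log(r)) for r, p in points)
        for (p0, lr0), (p1, lr1) in zip(pts, pts[1:]):
            if p0 <= q <= p1:
                t = 0.0 if p1 == p0 else (q - p0) / (p1 - p0)
                return lr0 + t * (lr1 - lr0)
        raise ValueError("quality outside curve range")
    lo = max(min(p for _, p in rd_ref), min(p for _, p in rd_test))
    hi = min(max(p for _, p in rd_ref), max(p for _, p in rd_test))
    n = 100  # numerical-integration samples over the overlap [lo, hi]
    qs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    avg = sum(interp_log_rate(rd_test, q) - interp_log_rate(rd_ref, q)
              for q in qs) / len(qs)
    return (math.exp(avg) - 1.0) * 100.0  # percent rate change

# Hypothetical values: a coder that hits 100 kbps against an 80 kbps target.
print(bitrate_error(100.0, 80.0))          # → 20.0
curve = [(1000, 30.0), (2000, 33.0), (4000, 36.0)]  # (kbps, PSNR dB)
print(bd_rate_linear(curve, curve))        # → 0.0 (identical curves)
```

A negative BD rate means the test coder needs fewer bits than the reference for the same quality, which is why the gains reported above (e.g. up to -43.04%) are negative percentages.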

A full listing of Raouf Hamzaoui's publications and outputs is available online.

Key research outputs

  • Ahmad, S., Hamzaoui, R., Al-Akaidi, M., Adaptive unicast video streaming with rateless codes and feedback, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, pp. 275-285, Feb. 2010.
  • Röder, M., Cardinal, J., Hamzaoui, R., Efficient rate-distortion optimized media streaming for tree-structured packet dependencies, IEEE Transactions on Multimedia, vol. 9, pp. 1259-1272, Oct. 2007.  
  • Röder, M., Hamzaoui, R., Fast tree-trellis list Viterbi decoding, IEEE Transactions on Communications, vol. 54, pp. 453-461, March 2006.
  • Röder, M., Cardinal, J., Hamzaoui, R., Branch and bound algorithms for rate-distortion optimized media streaming, IEEE Transactions on Multimedia, vol. 8, pp. 170-178, Feb. 2006.
  • Stankovic, V., Hamzaoui, R., Xiong, Z., Real-time error protection of embedded codes for packet erasure and fading channels, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 1064-1072, Aug. 2004.
  • Stankovic, V., Hamzaoui, R., Saupe, D., Fast algorithm for rate-based optimal error protection of embedded codes, IEEE Transactions on Communications, vol. 51, pp. 1788-1795, Nov. 2003.
  • Hamzaoui, R., Saupe, D., Combining fractal image compression and vector quantization, IEEE Transactions on Image Processing, vol. 9, no. 2, pp. 197-208, 2000.
  • Hamzaoui, R., Fast iterative methods for fractal image compression, Journal of Mathematical Imaging and Vision, vol. 11, no. 2, pp. 147-159, 1999.

 

Research interests/expertise

  • Image and Video Compression
  • Multimedia Communication
  • Error Control Systems
  • Image and Signal Processing
  • Pattern Recognition
  • Algorithms

Areas of teaching

Signal Processing

Image Processing

Data Communication

Media Technology

Qualifications

Master’s in Mathematics (Faculty of Sciences of Tunis), 1986

MSc in Mathematics (University of Montreal), 1993

Dr. rer. nat. (University of Freiburg), 1997

Habilitation in Computer Science (University of Konstanz), 2004

Courses taught

Digital Signal Processing

Mobile Communication 

Communication Networks

Signal Processing

Multimedia Communication

Digital Image Processing

Mobile Wireless Communication

Research Methods

Pattern Recognition

Error Correcting Codes

Honours and awards

Outstanding Associate Editor Award, IEEE Transactions on Multimedia, 2020

Certificate of Merit for outstanding editorial board service, IEEE Transactions on Multimedia, 2018

Best Associate Editor award, IEEE Transactions on Circuits and Systems for Video Technology, 2014

Best Associate Editor award, IEEE Transactions on Circuits and Systems for Video Technology, 2012

Membership of professional associations and societies

IEEE Senior Member

IEEE Signal Processing Society

IEEE Multimedia Communications Technical Committee 

British Standards Institution (BSI) IST/37 committee

Current research students

Sergun Ozmen, PT PhD student since July 2019

Mohamed Al-Ibaisi, PT PhD student since January 2017

 

Professional esteem indicators

Guest Editor IEEE Open Journal of Circuits and Systems, Special Section on IEEE ICME 2020.

Guest Editor IEEE Transactions on Multimedia, Special Issue on Hybrid Human-Artificial Intelligence for Multimedia Computing.

Editorial Board Member Frontiers in Signal Processing (2021-) 

Editorial Board Member IEEE Transactions on Multimedia (2017-2021)

Editorial Board Member IEEE Transactions on Circuits and Systems for Video Technology (2010-2016)

Area Chair, IEEE ICIP 2021, Anchorage, September 2021

Area Chair for Multimedia Communications, Networking and Mobility, IEEE ICME 2021, Shenzhen, July 2021

Workshops Co-Chair, IEEE ICME 2020, London, July 2020.

Technical Program Committee Co-Chair, IEEE MMSP 2017, London-Luton, Oct. 2017.