The Internet has become central to our lives and work. From chatting on Facebook to searching for a cure for cancer, almost every aspect of our lives is somehow related to the Internet. Long gone are the days of the original ARPANET. Since then, the network has been in constant evolution, transforming the initial ARPANET into what we know today as the Internet: an enormous conglomerate of interconnected computer networks. Despite its important role in our lives, its operation is far from completely understood. The continuous introduction of new network architectures, protocols and applications over the last decades has produced an ever-evolving entity that is difficult to study and understand. This has spurred the research community to analyze network traffic more thoroughly and shed light on the complex operation of the Internet. In particular, a new field of study, usually referred to as traffic classification, has become crucial for understanding the Internet.
The classification of network traffic not only satisfies our curiosity; it also has many important applications for network operators and IT administrators. BitTorrent and Skype (i.e., P2P), YouTube and Netflix (i.e., streaming) and Megaupload (i.e., direct download) are examples of network applications that at some point completely changed the paradigms of the Internet. Traffic classification helps in many different ways. For instance, studying how new applications impact the network can help in planning new infrastructures, architectures and protocols. Accurate classification also allows Internet Service Providers (ISPs) to apply Quality of Service (QoS) policies reliably, based on the needs of each application (e.g., VoIP calls). Finally, it opens a new range of billing possibilities for ISPs to profit from their infrastructures based on actual usage.
Network operators would like to be able to accurately classify all the traffic on their networks online. However, the continuous evolution of Internet applications and the techniques they use to avoid detection make their identification a very challenging task. The research community has therefore devoted considerable effort to the search for techniques that accurately identify and classify traffic. Nevertheless, a wide range of unaddressed practical problems arise when those techniques are applied in real scenarios, which involve enormous amounts of traffic and limited resources.
Image Steganalysis System
Steganalysis is the study of detecting messages hidden using steganography. The goal of steganalysis is to identify suspected packages, determine whether or not they have a payload encoded into them, and, if possible, recover that payload. The problem is generally approached with statistical analysis: a set of unmodified files of the same type, and ideally from the same source as the set being inspected, is analyzed for various statistics. Some of these are as simple as spectrum analysis, but since most images these days are compressed with lossy algorithms such as JPEG, steganalysis tools also look for inconsistencies in the way the data has been compressed.
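The text does not name the specific statistical tests involved, but one classical illustration of statistical steganalysis is the chi-square attack on LSB embedding: overwriting least-significant bits tends to equalize the counts of each pair of pixel values (2k, 2k+1), and a chi-square statistic over those pairs exposes this. The sketch below demonstrates the idea on synthetic pixel data (the histogram shape and threshold-free comparison are illustrative assumptions, not part of the system described here).

```python
import random
from collections import Counter

def chi_square_lsb(pixels):
    """Chi-square statistic over pairs of values (2k, 2k+1).

    Full LSB embedding drives the counts within each pair toward
    equality, so the statistic drops; natural pixel histograms
    usually show much larger within-pair asymmetry.
    """
    counts = Counter(pixels)
    chi2 = 0.0
    for k in range(128):
        even, odd = counts[2 * k], counts[2 * k + 1]
        expected = (even + odd) / 2.0
        if expected > 0:
            chi2 += (even - expected) ** 2 / expected
    return chi2

# Synthetic demo: a steep "natural" histogram vs. the same pixels
# with their LSBs overwritten by random message bits.
random.seed(0)
natural = [min(255, int(random.expovariate(0.2))) for _ in range(50_000)]
stego = [(p & ~1) | random.getrandbits(1) for p in natural]
```

On this synthetic data, `chi_square_lsb(natural)` comes out far higher than `chi_square_lsb(stego)`; a detector would flag low-statistic images as likely carriers.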
At SCL, the image processing team has developed a blind image steganalysis system, built from a number of mathematical models, image processing algorithms and machine learning tools. The system has several notable features:
- It can inspect whole folders and directories and classify images as clean or stego.
- Its speed adapts to the computer's CPU, using the maximum capability of the system.
- It can be updated as new steganography methods emerge.
- The methods used in this system are …
- It can be deployed at the network layer to help guarantee network security.
- It can be used by both amateurs and experts for different purposes; its settings can be changed according to user demand.
- It can be customized for various uses.
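The first two features (folder inspection and CPU-adaptive speed) suggest a batch-scanning front end. A minimal sketch of how such a scan might be wired up is shown below; `stego_score` is a hypothetical placeholder for the system's actual statistical models, and the file extensions and threshold are assumptions for illustration.

```python
import concurrent.futures
import os
from pathlib import Path

# Hypothetical scoring function: a real system would plug its
# statistical/ML models in here. Higher score = more likely stego.
def stego_score(path: Path) -> float:
    return 0.0  # placeholder

def scan_folder(root: str, threshold: float = 0.5) -> dict:
    """Walk `root` recursively and label each image clean or stego,
    sizing the worker pool by the number of CPU cores."""
    images = [p for p in Path(root).rglob("*")
              if p.is_file() and p.suffix.lower() in {".png", ".bmp", ".jpg", ".jpeg"}]
    workers = os.cpu_count() or 1
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for path, score in zip(images, pool.map(stego_score, images)):
            results[str(path)] = "stego" if score >= threshold else "clean"
    return results
```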
Image Fusion Assessment
In remote sensing applications, the increasing availability of spaceborne sensors has motivated a variety of image fusion algorithms. Many fusion algorithms have been developed in this area, and it remains an active topic for researchers. The main question, however, is how to evaluate these methods and choose the best one based on customer requirements.
Satellite image sensors provide two different types of information: panchromatic images covering a broad visual wavelength range with high spatial resolution, and spectral images covering narrow wavelength ranges with low spatial resolution. Image fusion algorithms combine these two sources of information to achieve high spatial and spectral resolution in a single image. The ideal solution improves the spatial resolution of the spectral images using the information in the panchromatic image, while keeping the spectral information of the original image unchanged. To evaluate image fusion methods, it is therefore necessary to propose a quantitative measure of the amount of spatial and spectral information preserved in the fused image.
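As a concrete example of the pan/spectral trade-off described above, one classical fusion scheme (not necessarily one evaluated here) is the Brovey transform, which injects the panchromatic image's spatial detail while preserving the relative proportions of the spectral bands. A minimal sketch, assuming NumPy and multispectral bands already upsampled to the panchromatic resolution:

```python
import numpy as np

def brovey_fuse(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Brovey-transform pan-sharpening.

    ms  : (bands, H, W) multispectral image, upsampled to the pan grid
    pan : (H, W) panchromatic image

    Each band is rescaled by pan / sum(bands), so the fused bands carry
    the pan image's spatial detail while their band-to-band ratios
    (the spectral proportions) are unchanged.
    """
    intensity = ms.sum(axis=0) + eps  # (H, W) synthetic intensity
    return ms * (pan / intensity)     # broadcasts over the band axis
```

A useful sanity check is that the fused bands sum (pixel-wise) to the panchromatic image, which is exactly why the transform gains spatial resolution at the cost of absolute spectral fidelity.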
At SCL, the image fusion assessment project team addresses this need. Several mathematical and image processing methods are used to assess the quality of fused images in the spectral and spatial domains. In each domain a new robust measure is proposed, and the final decision can be made using a weighted sum of these measures. This final measure can also be customized according to user requirements and the intended use of the fused images.
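The text does not disclose SCL's actual measures, but the weighted-sum structure can be sketched with simple correlation-based proxies: a spectral term (per-band correlation between the fused and original multispectral bands) and a spatial term (correlation between high-pass detail of the fused image and of the panchromatic image). The Laplacian high-pass filter, the equal default weights, and the metric names below are all illustrative assumptions.

```python
import numpy as np

def _corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two arrays of equal shape."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def _highpass(img: np.ndarray) -> np.ndarray:
    """4-neighbour Laplacian: a simple stand-in for a high-pass filter."""
    hp = np.zeros_like(img, dtype=float)
    hp[1:-1, 1:-1] = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
                      - img[1:-1, :-2] - img[1:-1, 2:])
    return hp

def fusion_quality(fused, ms, pan, w_spectral=0.5, w_spatial=0.5):
    """Weighted fusion-quality score in [-1, 1]; higher is better.

    spectral: mean per-band correlation between fused and original MS bands
    spatial : correlation between high-pass detail of the fused intensity
              and of the panchromatic image
    """
    spectral = np.mean([_corr(f, m) for f, m in zip(fused, ms)])
    spatial = _corr(_highpass(fused.mean(axis=0)), _highpass(pan))
    return w_spectral * spectral + w_spatial * spatial
```

Re-weighting `w_spectral` and `w_spatial` is how a score like this would be tuned to a customer's priorities, e.g. favoring spectral fidelity for classification tasks and spatial fidelity for visual interpretation.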