Fast and efficient lossless encoder in image compression with low computation and low memory

This research presents a new frequency-based lossless encoding technique with a low-computation and low-memory scheme. Unlike Huffman and Golomb-Rice encoders, this encoding technique requires no code table, and it does not take the high computation time of an arithmetic encoder. For thorough comparative results, nearly 200 standard images were tested with the proposed encoder and the results were compared with the standard encoders. Based on the analysis reported in the experimental part, the proposed scheme with Discrete Cosine Transformation (DCT) achieves a high Compression Ratio (CR) of 97.68%, Bits Per Pixel (BPP) of 0.19, and Peak Signal-to-Noise Ratio (PSNR) of 27.62 dB for the pepper image, and a CR of 98.65%, BPP of 0.11, and PSNR of 29.24 dB for the house image; with Discrete Wavelet Transformation (DWT) it achieves a CR of 98.58%, BPP of 0.11, and PSNR of 30.59 dB for the pepper image, and a CR of 98.37%, BPP of 0.13, and PSNR of 31.37 dB for the house image, compared to the other encoders.


INTRODUCTION
In the modern world, data is richer and more informative in digitized form. With the rapid growth of digitized images, image processing [1,2] is a vital part of data transmission and data storage, and it consumes a very large volume of data. To minimize the number of bits needed to represent an image, compression techniques assign the minimum number of bits to each piece of information. This reduces the bandwidth cost of sending images through a network in different image formats. The most popular image formats are JPEG and JPEG-2000 [3,4], which are prominent compression standards broadly applied in numerous image applications and software. Basically, image compression is divided into lossless and lossy schemes. In lossless compression, the decoded image is identical to the input image and there is no loss of data. These techniques are widely used in medical applications such as cancer detection and bone marrow identification. Huffman encoding, run-length coding, arithmetic encoding, and differential pulse code modulation (DPCM) are some of the lossless techniques in image compression. Content-based image compression, LOCO-I [5], Lempel-Ziv coding [6], Golomb-Rice (GR) coding [7], JPEG lossless (JPEG-LS) [8,9], and CALIC lossless coding [10] are some schemes reported for lossless image compression. Lossy compression is used to increase the CR by minimizing the number of bits used to represent the decoded image. Data is lost in lossy techniques, and the result is not identical to the input image. Transform coding, fractal compression [11], sub-band coding [12], predictive coding [13], vector quantization [14], JPEG, and JPEG-2000 are well-known lossy compression techniques. Image compression involves four stages, namely transformation, quantization, encoding, and decoding.
A mathematical function called a transformation is applied to the input image to reduce the correlation between pixels, which makes image operations easier to perform than on the original image. Various transformations are used in image compression, as follows. The Karhunen-Loève transformation (KLT) [15] is the optimal transformation technique and keeps as much energy as possible in a few coefficients. The Walsh-Hadamard transformation (WHT) [16] is the fastest data compression method, since it requires only addition and subtraction operations. The discrete cosine transformation (DCT) [17] offers great energy compaction and is a popular block-based transformation technique in image processing. Due to its blocking artefacts, and to increase the compression ratio, the discrete wavelet transformation (DWT) [18] is used; it provides multiresolution compression at a higher computational complexity than the other transformations. Quantization is a high-compression step that reduces the bits required to represent the decoded image. Generally, it progressively reduces the precision of the transformed coefficients, so that fewer non-zero coefficients are obtained. Two types of quantization are used in image compression, namely scalar and vector quantization. Scalar quantization (SQ) is the most commonly used and simplest quantization, denoted y = Q(x): the quantization function Q maps the input value x to the output value y, considering one sample at a time. Each transformed coefficient is quantized by applying the quantization table. Krishnamoorthy et al. [19] presented an integer compression coding technique using SQ with orthogonal polynomial transformation to give a good CR. To improve on the performance of SQ and lower the average distortion, vector quantization considers a whole vector of data and then applies a look-up table to extract an approximation of the coded vector. Shen et al.
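The scalar quantization mapping y = Q(x) described above can be sketched as follows. This is an illustrative sketch with a single uniform step size; the function names and the uniform step are assumptions, not the paper's exact table-driven quantizer:

```python
def scalar_quantize(x, step):
    """Uniform scalar quantization y = Q(x): map the sample x to the
    index of the nearest multiple of the step size (one sample at a time)."""
    return round(x / step)

def scalar_dequantize(y, step):
    """Approximate reconstruction: multiply the index back by the step.
    The difference from the original x is the quantization error."""
    return y * step

# A coefficient of 37 with step 10 maps to index 4,
# which dequantizes to the approximation 40 (error of 3).
```

Replacing the single step with a per-position table of steps gives the matrix quantization used in JPEG-style coders.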
[20] presented a vector quantization (VQ) based compression technique to enhance the quality of VQ-compressed images and produce lossless compression. Some vector quantization techniques are tree-structured vector quantization [21], side-match vector quantization (SMVQ) [22], and finite-state VQ [23]. After applying quantization, the high-frequency alternating current (AC) coefficients, as opposed to the first element of the DCT block (the direct current, DC, term), become almost zero when divided by the larger quantum values. A major drawback of this technique is the low quality of the reconstructed image, known as artefacts. In the final stage, entropy coding converts the transform coefficients into a binary bitstream and reduces coding redundancy. Huffman coding and arithmetic coding are the two popular entropy coding techniques used in image compression. Huffman encoding is a lossless technique based on statistical, variable-length coding. It allocates shorter codes to more frequently used characters and longer codes to less frequent ones, compressing the image and reducing storage. It refers to the Huffman table to generate bit codes for the source characters, so it takes extra memory during encoding and decoding. In arithmetic coding, each source symbol can be represented by a fractional number of bits per character. It differs from alternative encoders such as Huffman coding: it divides the input into symbols and then replaces every symbol with a code drawn from a shrinking interval. Because of this process, the encoding part takes high computation time to generate bit codes for the input symbols. Both techniques are used in lossy and lossless compression depending on whether quantization is applied. These encoders take high computation time and additional memory to achieve compression because they refer to tables or codebooks.
To solve this issue, Debin Zhao et al. [24] proposed an entropy coder called the low-complexity and low-memory entropy coder (LLEC), which can compress images with high CR using minimum computation time and a small amount of memory. The GR coding technique is a variable-length code like Huffman and consists of two parts: quotient (Q) and remainder (R). Sugiura et al. [25] presented a lossless optimal extension of the GR code to attain great compression performance; this scheme consumes more memory in the encoding part because it refers to the GR code table. Several new schemes of modified Huffman encoding [26,27] have been presented to improve the CR and its performance through minor modifications to Huffman coding.
To minimize the number of required bits and reconstruct a good-quality processed image, encoding and decoding play an important role. Analysing these different encoding schemes shows that the Huffman and GR coders consume less computation time than the arithmetic coder, even though they refer to a table to generate bit codes. Modified Huffman encoding increases computation time by adding new steps to the existing technique. From an implementation viewpoint, the arithmetic coder takes more computation time and achieves a higher CR than all the other encoders. A high compression ratio, easy and fast implementation, and good decoded image quality without referring to any table are the most challenging tasks for researchers. To address these, a new encoding technique is presented. The key objective of the proposed scheme is to attain a high compression ratio and good decoded image quality with low computation time, measured in terms of CR, mean square error (MSE), and PSNR. The encoding assigns only the minimum number of bits to the quantized transform coefficients, and the reverse process losslessly reconstructs the original values.
The rest of this paper is organized as follows. A detailed description of the proposed scheme is given in Section 2, and Section 3 presents the performance measures. In Section 4, the experimental results and comparative analysis are reported and discussed. Section 5 concludes the paper.

PROPOSED SCHEME: FAST AND EFFICIENT LOSSLESS ENCODER WITH LOW COMPUTATION AND LOW MEMORY
In this section, the fast and efficient lossless encoder with minimum computation and low memory is described. Initially, the original image of size (N × N) is split into (n × n) blocks, where n < N, and DCT is applied to reduce the correlation between pixels, as shown in Equation (1):

I1(u,v) = (2/n) C(u) C(v) Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} I(i,j) cos[(2i+1)uπ / 2n] cos[(2j+1)vπ / 2n]  (1)

where C(k) = 1/√2 for k = 0 and C(k) = 1 otherwise, and I(i,j) and I1(i,j) are the original image and its corresponding DCT image, respectively. To achieve a high CR, each block of I1 is subjected to SQ using a quantization matrix, as in JPEG:

Q(i,j) = round( DCT(i,j) / quantum(i,j) )  (2)

where Q(i,j) is the quantized value for the corresponding coefficient DCT(i,j). The quantum(i,j) values are generated using the formula

quantum(i,j) = 1 + (1 + i + j) × qf  (3)

where qf is a quality factor given as user input, generally in the range 0-25. Q(i,j) is assigned the minimum number of bits by reducing the precision of I1(i,j). After transformation and quantization, the quantized transform coefficients (QTC) are arranged as a one-dimensional array in zigzag order to eliminate the continuous zero coefficients in the lower-right corner of each (8 × 8) block. By eliminating these zeros, a higher CR is attained by storing coefficients only up to the last non-zero quantized transform coefficient. Finally, the proposed coder is applied to every (8 × 8) block of QTC and converts the coefficients into a bitstream. The proposed coder performs well not only on the blocking sequence of DCT but also on DWT. The encoding process of the proposed scheme is shown in Figure 1.
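The quantum-matrix generation and the zigzag scan with trailing-zero elimination can be sketched as follows. The quantum formula quantum(i,j) = 1 + (1 + i + j) × qf is the classic JPEG-tutorial form and is an assumption here; the paper's exact matrix may differ:

```python
def quantum_matrix(qf, n=8):
    """Quality-factor-driven quantum values (assumed form:
    quantum(i, j) = 1 + (1 + i + j) * qf). Larger qf -> coarser steps."""
    return [[1 + (1 + i + j) * qf for j in range(n)] for i in range(n)]

def zigzag_indices(n=8):
    """(i, j) coordinates of an n x n block in JPEG zigzag scan order:
    anti-diagonals of increasing i+j, alternating traversal direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag_truncate(block):
    """Scan a 2-D block in zigzag order and drop the run of trailing
    zeros, keeping values only up to the last non-zero coefficient."""
    seq = [block[i][j] for i, j in zigzag_indices(len(block))]
    while seq and seq[-1] == 0:
        seq.pop()
    return seq
```

For an all-zero block the truncation returns an empty list, so only the block delimiter costs any bits.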
In the reverse process, the lossless proposed coder reconstructs the original bits to produce the decoded image, as shown in Figure 2. It attains great compression with good quality, which is measured by CR and PSNR.
In the proposed work, the code words are uniquely defined by the input value, and hence no codebook is needed. It therefore takes less computation to compress the image than other coders such as the Huffman, arithmetic, LLEC, and GR coders. To implement the lossless proposed coder and attain high compression, some parameters are calculated on each (8 × 8) QTC block. The quantized block is represented as QB. The last non-zero coefficient in QB is represented as P and is considered the final value. One QB consists of 1 DC and 63 AC coefficients. To assign the minimum number of bits to the DC term, DPCM is applied to every QB, as shown in Equation (4):

DPCM = DC_prev − DC_curr  (4)

where DC_curr is the DC value in the current QB, and DC_prev is the DC value in the previous QB. The result is then passed to the proposed coder to generate the bitstream. Every transform coefficient n is converted into a binary form marked as b. To decide between the even and odd series of b, the length of b is computed and represented as L. If L is even, then the number of reference bits p for the even series is computed from Equation (5):

p = L / 2  (5)
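The DC differencing of Equation (4) across successive blocks can be sketched as below. Transmitting the first DC value unchanged is an assumption made so the sequence is invertible:

```python
def dpcm_dc(dc_values):
    """Differential coding of per-block DC terms, per Equation (4):
    DPCM = DC_prev - DC_curr. The first DC is kept as-is (assumption)."""
    out = [dc_values[0]]
    prev = dc_values[0]
    for dc in dc_values[1:]:
        out.append(prev - dc)
        prev = dc
    return out

def dpcm_dc_inverse(diffs):
    """Lossless inverse: DC_curr = DC_prev - DPCM."""
    out = [diffs[0]]
    prev = diffs[0]
    for d in diffs[1:]:
        prev = prev - d
        out.append(prev)
    return out
```

Because neighbouring blocks have similar DC levels, the differences are small and encode with fewer bits than the raw DC values.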
where p '0' bits are added in the encoding part, followed by a '1' bit, as reference bits; the coding keeps the number of reference bits small. Finally, the binary form b of n is appended after the reference bits. If L is odd, then the number of reference bits q for the odd series is computed from Equation (6):

q = (L + 1) / 2  (6)

where q '1' bits are added in the encoding part, followed by a '0' bit, as reference bits. Finally, the binary form b of n is appended after the reference bits. For example, let n be 17. Then b is 10001 and L is 5. Since L is odd, Equation (6) gives q = 3, so the reference bits of n are 1110, followed by 10001 as b. The bitstream of 17 using the proposed coder is therefore 111010001. If n is a negative coefficient such as −12, b is 0011 (the 1's complement of 1100) and L is 4. Since L is even, Equation (5) gives p = 2, so the reference bits are 001, followed by 0011 as b. The bitstream of −12 using the proposed coder is therefore 0010011.
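The per-coefficient code described above can be sketched as a short function. This is our reading of the rule (reference bits first, then the binary form b, with 1's complement marking negatives), not the authors' reference implementation; zero coefficients are assumed to have been removed by the zigzag truncation:

```python
def encode_coeff(n):
    """Encode one non-zero coefficient n as reference bits + binary form.
    Even L: (L/2) zeros then '1'; odd L: ((L+1)/2) ones then '0'."""
    b = format(abs(n), 'b')
    if n < 0:
        # 1's complement of the magnitude marks a negative coefficient;
        # its leading bit becomes '0', so the sign is recoverable.
        b = ''.join('1' if c == '0' else '0' for c in b)
    L = len(b)
    if L % 2 == 0:
        ref = '0' * (L // 2) + '1'        # even series, Equation (5)
    else:
        ref = '1' * ((L + 1) // 2) + '0'  # odd series, Equation (6)
    return ref + b
```

For n = 17 this yields 1110 followed by 10001, and for n = −12 it yields 001 followed by 0011, matching the worked examples.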
To show the performance of the proposed coder, a sample one-dimensional QTC block after zigzag ordering was encoded. The proposed coder assigns fewer bits to the low-frequency coefficients than the other coders. It achieves a higher CR as qf increases, because more short-length low-frequency values of n are obtained.

Encoding algorithm using proposed coder
The steps involved in the encoding algorithm using the proposed coder are presented in detail below.

Input: A monochrome image of size (ROW × COL)
Output: Encoded bitstream using the proposed scheme
Begin
Step 1: Divide the input image into (8 × 8) non-overlapping sub-blocks
Step 2: Apply DCT to each block of I(i,j)
Step 3: Perform quantization with various qf, as mentioned in Section 2
Step 4: After quantization, arrange the coefficients in zigzag scan order and find the last non-zero coefficient value
Step 5: Apply differential pulse code modulation to the DC coefficient of each block
Step 6: Apply the proposed coder to convert the coefficients into a bitstream, as mentioned in Section 2
End

The proposed coder performs well on DCT and DWT and achieves lossless high compression. In the decoding part, the lossless decoded image is extracted by applying the reverse of the proposed coder. More than 200 sample images of size (256 × 256) were tested using the proposed scheme and compared with other encoding techniques. The obtained results are analysed against the other encoder schemes in Section 4.

Decoding algorithm using proposed coder
The steps involved in the decoding algorithm, using the reverse process of the proposed coder, are presented in detail below.

Input: Bitstream of the encoded image
Output: Decoded image using the inverse of the proposed encoding algorithm
Begin
Step 1: Load the bitstream of an image from disk
Step 2: Apply the reverse process of the proposed coder to convert the bitstream back into coefficients
Step 4: Perform the inverse of the differential DC coding for each block
Step 5: Apply de-quantization, then rearrange in inverse zigzag order
Step 6: Apply the inverse transformation to each block, then merge the partitions to obtain the reconstructed image
End
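The reverse of the per-coefficient code can be sketched as follows. The code is self-delimiting: the leading run of identical reference bits reveals both the parity and the length L of the binary form, so no table is needed. This is a sketch under the same assumptions as our reading of the encoder (reference bits first, 1's complement for negatives):

```python
def decode_coeff(bits, pos=0):
    """Decode one coefficient starting at bits[pos]; return (n, next_pos).
    A run of r zeros + '1' means even L = 2r; a run of r ones + '0'
    means odd L = 2r - 1. The next L bits are the binary form b."""
    first = bits[pos]
    run = 0
    while bits[pos + run] == first:
        run += 1
    pos += run + 1                       # skip the run and its terminator
    L = 2 * run if first == '0' else 2 * run - 1
    b = bits[pos:pos + L]
    pos += L
    if b[0] == '1':                      # positive magnitudes start with '1'
        return int(b, 2), pos
    # leading '0' marks a 1's-complemented negative magnitude
    mag = int(''.join('1' if c == '0' else '0' for c in b), 2)
    return -mag, pos

# Coefficients concatenated into one stream decode sequentially by
# threading next_pos back in as pos.
```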

PERFORMANCE MEASURE
The results of the proposed coder are reported in the experimental part and compared with the other standard encoding techniques. To compute the results, some standard performance measures were used. The proposed scheme is put forward to achieve a higher compression ratio than other encoding techniques, computed as

CR (%) = ((OB − DB) / OB) × 100

where DB and OB represent the Decoded Bits and the Original Bits, respectively. The lossless proposed scheme retains the fine quality of the decoded image even while achieving a high CR. The quality of the resulting image is computed by MSE and PSNR. MSE is described as the cumulative squared error between the original and reconstructed images, calculated as

MSE = (1 / (M × N)) Σ_{m=1}^{M} Σ_{n=1}^{N} [X(m,n) − Y(m,n)]²

where X(m,n) and Y(m,n) are the pixel values at position (m,n) of the original image and the decoded image, respectively, and M and N are the dimensions of the input images. PSNR is interpreted as the standard quality measurement between the original and the decoded image:

PSNR = 10 log10( R² / MSE )
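The three measures can be sketched directly from their definitions. The function names are illustrative, and CR is computed here as the percentage of bits saved, which is our reading of the formula given the reported values near 100%:

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """Percentage of bits saved: ((OB - DB) / OB) * 100."""
    return (original_bits - compressed_bits) / original_bits * 100

def mse(x, y):
    """Mean squared error between two same-sized images
    given as lists of rows of pixel values."""
    m, n = len(x), len(x[0])
    return sum((x[i][j] - y[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(x, y, peak=255):
    """Peak signal-to-noise ratio in dB: 10 * log10(R^2 / MSE).
    Identical images give infinite PSNR (MSE of zero)."""
    e = mse(x, y)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```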
where R is the peak signal level of a grayscale image, taken as 255.

The obtained results are given in Tables 4, 5 and 6, respectively, and are shown in Figures 17, 18, and 19, respectively. The GR, arithmetic, and LLEC coders were tested on the same standard images in DWT, and the acquired outcomes are given in Tables 4-6. It is observed that the proposed scheme gives decoded image quality, in PSNR, equal to Huffman encoding. At the initial quality factors of 5 and 10, the proposed scheme does not achieve a better compression ratio while maintaining the same PSNR value as the other coders. At the quality factors of 15 and 20, it achieves very good CR, MSE, and PSNR values in both DCT and DWT. Still, the predominant challenge in compression is the degradation of decoded image quality at high compression caused by quantization. In the future, the design of lossless video compression with low computation and low memory will be the focus.