
[Similarly =] {HP Labs 2004 Technical Reports} HPL-2004-75 Comparative Analysis of Arithmetic Coding Computational Complexity -- Said, Amir



Author: Ivan Pavlov,
August 13, 2004, 14:35:32

In reply to: [link] [waste not, want not =] M. Hans and R. W. Schafer. "Lossless compression of digital audio". IEEE Signal Processing Magazine, Volume 18, Issue 4, Pages 21-32, July 2001, posted by Ivan Pavlov on August 12, 2004, 15:23:14:


Comparative Analysis of Arithmetic Coding Computational Complexity


Said, Amir

HPL-2004-75
April 30, 2004
External

Keyword(s): entropy coding; compression; complexity

Abstract: Some long-held assumptions about the most demanding computations in arithmetic coding are now obsolete due to new hardware. For instance, it is not advantageous to replace multiplication, which can now be done with high precision in a single CPU clock cycle, with comparisons and table-based approximations. A good understanding of the cost of the arithmetic coding computations is needed to design efficient implementations for current and future processors. In this work we profile these computations by comparing the running times of many implementations, trying to change at most one part at a time, and avoiding small effects being masked by much larger ones. For instance, we test arithmetic operations ranging from 16-bit integers to 48-bit floating point, and renormalization outputs from a single bit to 16 bits. To evaluate the complexity of adaptive coding we compare static models and different adaptation strategies. We observe that significant speed gains are possible if we do not insist on updating the code immediately after each coded symbol. The results show that the fastest techniques are those that effectively use the CPU's hardware: full-precision arithmetic, byte output, table look-up decoding, and periodic updating. The comparison with other well-known methods shows that the version that incorporates all these accelerations is substantially faster, and that its speed approaches that of Huffman coding.

Notes: Copyright IEEE 2004. Published in the IEEE Data Compression Conference, 23-25 March 2004, Snowbird, Utah, USA.


21 Pages
