Ethereum’s Vitalik Buterin Supports TiTok as a Blockchain App

According to Ethereum (ETH) co-founder Vitalik Buterin, the new image compression method TiTok AI (Transformer-based 1-Dimensional Tokenizer) can encode images in a size small enough to store onchain.

On his Warpcast social media account, Buterin called the image compression method a new way of “encoding a profile picture.” He added that being able to compress an image to 320 bits, which he called “basically a hash,” would make images small enough to store onchain for every user.
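
As a quick check of the arithmetic (a rough sketch that assumes a codebook of 1,024 entries, a figure not given in Buterin’s post), 32 tokens at 10 bits each comes out to exactly the 320 bits he mentions:

```python
import math

# Assumed codebook of 1,024 entries; the post only mentions 32 tokens and 320 bits.
tokens_per_image = 32
codebook_size = 1024

bits_per_token = int(math.log2(codebook_size))   # 10 bits to index 1,024 entries
total_bits = tokens_per_image * bits_per_token

print(f"{total_bits} bits per image")            # 320 bits, i.e. 40 bytes
```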

The Ethereum co-founder became interested in TiTok AI after seeing an X post by a researcher at the artificial intelligence (AI) imaging platform Leonardo AI.

The researcher, posting under the name @Ethan_smith_20, briefly explained how the method could help those interested in reinterpreting high-frequency details within images to encode complex images into just 32 tokens.

Buterin’s comments suggest that the method could make it much easier for developers and creators to create profile pictures and non-fungible tokens (NFTs).

Fixed previous image tokenization issues

TiTok AI, developed through a collaboration between TikTok parent company ByteDance and the Technical University of Munich, is described as an innovative one-dimensional tokenization framework that diverges significantly from the prevailing two-dimensional methods.

According to a research paper on the image tokenization method, TiTok can compress 256 x 256 pixel images into “32 distinct tokens.”
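
To put that in perspective, here is a rough calculation that assumes standard 24-bit RGB pixels and the 40-byte token payload implied by Buterin’s 320-bit figure; under those assumptions the token representation is several thousand times smaller than the raw pixel data:

```python
# Assumptions for illustration: 24-bit RGB pixels and a 40-byte (320-bit) token payload.
raw_bytes = 256 * 256 * 3        # 196,608 bytes of uncompressed pixel data
token_bytes = 320 // 8           # 40 bytes for 32 tokens at 10 bits each

print(f"~{raw_bytes / token_bytes:,.0f}x smaller")   # roughly 4,915x
```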

The paper highlights problems encountered with previous image tokenization methods such as VQGAN. Image tokenization was already possible, but strategies were limited to “2D latent grids with fixed downsampling factors.”

2D tokenization cannot get around the redundancy inherent in images, where neighboring regions tend to be highly similar.
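
For scale, here is a hedged comparison that assumes a downsampling factor of 16, which is common for VQGAN-style tokenizers but not stated in the article: a fixed 2D grid produces far more tokens for the same image than TiTok’s 1D sequence.

```python
# Assumed downsampling factor of 16 for a VQGAN-style 2D latent grid.
image_side = 256
downsampling_factor = 16

grid_side = image_side // downsampling_factor   # 16
tokens_2d = grid_side * grid_side               # 256 tokens on the 2D grid
tokens_titok = 32                               # TiTok's 1D sequence length

print(f"2D grid: {tokens_2d} tokens, TiTok: {tokens_titok} tokens")
```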

TiTok promises to solve this problem by tokenizing images into 1D latent sequences, providing a “compact latent representation” and eliminating regional redundancy.

The tokenization strategy could also help simplify image storage on blockchain platforms, while offering notable improvements in processing speed.
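
As a minimal sketch of what compact onchain storage might look like (the packing scheme and the 10-bit token width are assumptions for illustration, not part of the paper), 32 token IDs can be packed into a single 40-byte payload:

```python
def pack_tokens(token_ids, bits_per_token=10):
    """Pack small integer token IDs into one compact byte string."""
    value = 0
    for token_id in token_ids:
        value = (value << bits_per_token) | token_id
    total_bits = bits_per_token * len(token_ids)
    return value.to_bytes((total_bits + 7) // 8, "big")

# 32 hypothetical token IDs, each below 1,024 (i.e. 10 bits wide).
example_tokens = list(range(32))
payload = pack_tokens(example_tokens)
print(len(payload), "bytes")   # 40 bytes = 320 bits
```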

It also boasts speeds up to 410 times faster than current technologies, representing a huge leap forward in computational efficiency.
