We’ve all been there. You import a CSV from a client, scrape a legacy website, or process an old text file, and suddenly your output looks like `Ã©` instead of `é`. Garbage characters. Mojibake.

PHP gives us tools to handle this, but they aren't magic. Let’s look at how to reliably detect encoding, and when you shouldn't rely on detection at all.

PHP’s Multibyte String extension (mbstring) provides `mb_detect_encoding()`. It scans a string and tries to guess the character set.

```php
// Wrong approach for text encoding: finfo only reports a coarse MIME guess
$finfo = finfo_open(FILEINFO_MIME_ENCODING);
echo finfo_file($finfo, 'file.txt'); // "us-ascii" or "utf-8" (unreliable)
finfo_close($finfo);

// Better: read the content and detect it directly
$content = file_get_contents('file.txt');
echo mb_detect_encoding($content);
```

Even then, don't trust a `UTF-8` answer blindly:

```php
// Double-check UTF-8 validity
if ($detected === 'UTF-8' && !mb_check_encoding($string, 'UTF-8')) {
    return 'Windows-1252'; // common fallback for Western European text
}
```

For serious work, `mb_detect_encoding()` has real limitations: it walks its candidate list in order and returns the first encoding the bytes happen to be valid in, so it cannot meaningfully distinguish single-byte encodings like ISO-8859-1 and Windows-1252. If you need statistical detection, look for a maintained Packagist port of Mozilla’s universalchardet algorithm; otherwise, stick with mbstring and always pass `true` as the third argument (strict mode) so it validates instead of loosely matching.

Putting it together:

```php
function smartEncodingDetect(string $string, array $priorities = ['UTF-8', 'ISO-8859-1', 'Windows-1252']): string
{
    foreach ($priorities as $encoding) {
        // For UTF-8, validate it strictly
        if ($encoding === 'UTF-8' && mb_check_encoding($string, 'UTF-8')) {
            return 'UTF-8';
        }
        // For others, attempt detection in strict mode
        if (mb_detect_encoding($string, $encoding, true) === $encoding) {
            return $encoding;
        }
    }

    return 'UTF-8'; // safe fallback
}
```
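To see why the candidate order matters so much to `mb_detect_encoding()`, here is a small sketch (the example bytes are my own, not from the article): ISO-8859-1 and Windows-1252 both accept nearly every byte, so whichever one you list first "wins".

```php
<?php
// Windows-1252 curly quotes (0x93/0x94). In ISO-8859-1 these same bytes are
// C1 control characters, yet mbstring still considers them valid, so the
// detector simply returns whichever candidate is listed first.
$bytes = "smart quote: \x93hi\x94";

echo mb_detect_encoding($bytes, ['ISO-8859-1', 'Windows-1252'], true), "\n";
echo mb_detect_encoding($bytes, ['Windows-1252', 'ISO-8859-1'], true), "\n";
// Reordering the list flips the answer — detection here is validation
// order, not statistics.
```

This is exactly why a statistical detector (or explicit knowledge of the source) beats guessing for single-byte encodings.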
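Detection is only half the job; the goal is usually to normalize everything to UTF-8 at the boundary. A minimal sketch, assuming mbstring is available (`toUtf8` and its candidate list are my own, hypothetical choices, not something the article prescribes):

```php
<?php
// Hypothetical helper: convert arbitrary input to UTF-8 exactly once,
// at the point where it enters the application.
function toUtf8(string $raw): string
{
    if (mb_check_encoding($raw, 'UTF-8')) {
        return $raw; // already valid UTF-8, leave it untouched
    }

    // Assumed fallback chain for Western European input.
    $from = mb_detect_encoding($raw, ['ISO-8859-1', 'Windows-1252'], true)
        ?: 'Windows-1252';

    return mb_convert_encoding($raw, 'UTF-8', $from);
}

// "é" in Latin-1 is the single byte 0xE9 — invalid as UTF-8.
echo toUtf8("caf\xE9"), "\n"; // café
```

Convert once on input, store and emit UTF-8 everywhere else, and mojibake has far fewer places to sneak in.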