Instead of easier or harder, we can talk about the likelihood of a lossless compression algorithm reducing the size of the data significantly. Certain properties of English-language text, and of XML / HTML / source code / markup documents, make this likelihood high. ASCII-encoded English text generally wastes 1-3 bits per byte, since the document's alphabet is likely to contain somewhere between 32 and 100 unique symbols. Symbol use in English (at both the letter and the word level) is not uniformly distributed: some symbols (E, T, A, O, etc.) are much more common than others (Z, X, Q, etc.), which makes English text especially suitable for compression with variable-length codes. The vocabulary of an English document, or of "code" (which is typically written in a subset of English), is likely to fall somewhere between 2,000 and 20,000 unique words regardless of document length, with an average word length of about five letters, so even a trivial substitution of a two-byte code for each word, plus a lookup table, should yield some compression. A binary file could have all of these same qualities, or it could represent incompressible white noise.
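As a rough illustration of both points, here is a small Python sketch (the file name sample.txt is only a placeholder for any ASCII English document) that measures the alphabet size and per-character entropy of a text and estimates the output size of the naive two-byte-per-word substitution described above:

    import math
    from collections import Counter

    def bits_per_symbol(text: str) -> float:
        """Zero-order Shannon entropy of the character distribution, in bits."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def word_substitution_size(text: str) -> int:
        """Rough output size (bytes) of the naive scheme described above:
        a 2-byte code per word plus a lookup table of the unique words."""
        words = text.split()
        table = sum(len(w) + 1 for w in set(words))  # each unique word + a separator byte
        body = 2 * len(words)                        # one 2-byte code per word occurrence
        return table + body

    # "sample.txt" is a placeholder for whatever document you want to measure.
    sample = open("sample.txt", encoding="ascii").read()
    print(f"unique symbols    : {len(set(sample))}")
    print(f"entropy           : {bits_per_symbol(sample):.2f} bits/char (stored as 8)")
    print(f"original size     : {len(sample)} bytes")
    print(f"word substitution : {word_substitution_size(sample)} bytes")

On typical English prose the character-level entropy comes out somewhere around 4-5 bits per character against the 8 bits actually stored, which is where the "wasted bits" figure above comes from.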
The noise would be of no use, but a binary file with those text-like qualities would be highly compressible, and could even serve reasonably well as a text storage format, since a simple textual representation is not badly inefficient to begin with. The point is that how well a message compresses depends on the statistical structure of its content and on the compression method, and far less on the raw length of the data or on whether it is labelled "text" or "binary". Length only really matters at the small end: a file of around 10 bytes cannot usefully be compressed, because the header or lookup-table overhead of any general-purpose method outweighs the handful of bits it might save.
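To make the "text-like content versus white noise" distinction concrete, here is a small comparison using Python's standard zlib module. The repetitive ASCII input is only a stand-in for real English text, but it shows how sharply the achievable ratio depends on the statistics of the content rather than on the file's label:

    import os
    import zlib

    def ratio(data: bytes) -> float:
        """Compressed size / original size; smaller means more compressible."""
        return len(zlib.compress(data, 9)) / len(data)

    text = b"the quick brown fox jumps over the lazy dog " * 500  # highly redundant ASCII
    noise = os.urandom(len(text))                                  # incompressible random bytes

    print(f"text-like content : {ratio(text):.3f}")   # far below 1.0
    print(f"white noise       : {ratio(noise):.3f}")  # about 1.0 or slightly above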