We present a typing system with non-idempotent intersection types, typing a term syntax that covers three different calculi: the pure {\lambda}-calculus, the calculus with explicit substitutions {\lambda}S, and the calculus with explicit substitutions, contractions and weakenings {\lambda}lxr. In each of the three calculi, a term is typable if and only if it is strongly normalising, as is the case in (many) systems with idempotent intersections. Non-idempotency brings extra information into typing trees, such as simple bounds on the length of the longest reduction sequence from a term to its normal form. Strong normalisation follows, without requiring reducibility techniques. Using this, we revisit models of the {\lambda}-calculus based on filters of intersection types, and extend them to {\lambda}S and {\lambda}lxr. Non-idempotency simplifies a methodology, based on such filter models, that produces modular proofs of strong normalisation for well-known typing systems (e.g. System F). We also present a filter model by means of orthogonality techniques, i.e. as an instance of an abstract notion of orthogonality model formalised in this paper and inspired by classical realisability. Compared to other instances based on terms (one of which rephrases a now-standard proof of strong normalisation for the {\lambda}-calculus), the instance based on filters is shown to be better suited to proving strong normalisation results for {\lambda}S and {\lambda}lxr. Finally, the bounds on the longest reduction sequence, read off our typing trees, are refined into an exact measure, read off a specific typing tree (called principal); in each of the three calculi, a specific reduction sequence of that length is identified. In the case of the {\lambda}-calculus, this complexity result is, for longest reduction sequences, the counterpart of de Carvalho's result for linear head-reduction sequences.
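To fix intuitions about non-idempotency, here is a minimal sketch of a typical application rule in a non-idempotent (quantitative) intersection-type system; the notation is illustrative and not necessarily the exact system of this paper. Intersections are treated as multisets, so an intersection of a type with itself is not identified with that type, and each use of the argument must be justified by a separate premise.

% Sketch of a non-idempotent application rule (illustrative notation,
% not necessarily the rules of this paper):
\[
\frac{\Gamma \vdash t : [A_1, \ldots, A_n] \to B
      \qquad (\Delta_i \vdash u : A_i)_{i=1}^{n}}
     {\Gamma + \Delta_1 + \cdots + \Delta_n \vdash t\, u : B}
\]
% Here [A_1, ..., A_n] is a multiset of types and + is multiset union of
% typing contexts: every copy of u consumed during reduction is accounted
% for by one premise, which is what makes typing trees carry quantitative
% bounds on reduction lengths.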