Donald Knuth famously said, of the time and space efficiency of individual fragments of code, that "premature optimization is the root of all evil". On performance-critical paths through a code base, where bad performance would compromise the business requirements, good performance is an important marker of quality; in other cases, writing super-optimised code can make things worse on the quality criteria that actually matter. Most programmers now understand that this particular misapplication of a quality criterion is foolish, but many still fail to grasp the general principle: other quality criteria are context-dependent as well.
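A small, hypothetical illustration of the trade-off (neither function comes from any real code base): both versions below answer the same question, but one prioritises readability while the other prioritises speed.

```python
# Hypothetical example: two ways to test whether n is a power of two.

def is_power_of_two_clear(n):
    # Straightforward: repeatedly halve until odd. Easy for any
    # reader to verify, and fast enough almost everywhere.
    if n < 1:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

def is_power_of_two_tricky(n):
    # The classic bit-twiddling micro-optimisation: a power of two
    # has exactly one set bit, so n & (n - 1) clears it to zero.
    # Quicker, but opaque unless the reader already knows the trick -
    # a cost worth paying only on a genuinely hot path.
    return n > 0 and (n & (n - 1)) == 0

# Both agree on every input; the difference is purely in which
# quality criterion (clarity vs. speed) each one optimises for.
assert all(is_power_of_two_clear(n) == is_power_of_two_tricky(n)
           for n in range(1000))
```

If this check never appears in a profile, the "clever" version buys nothing the business requirements care about, at a real cost in readability.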
What do we really mean when we talk about quality?
Quality is the set of attributes and features of an output that determine how well it meets the 'business'1 requirements. Quality can be considered not just for the final output (e.g. the binary that you ship) but for intermediate outputs as well (for example, a source code file, or even the software development process documentation).
Quality criteria are therefore proxies for ensuring that a project is optimal for the business. Setting and achieving appropriate quality criteria helps to ensure that the maximum benefit is captured, and that costs, risks, scope and time are controlled.
Using best practices appropriately can help you to set appropriate quality criteria, because they can provide an expert source of knowledge on what works for achieving certain outcomes, and what doesn't.
So where do many developers get it wrong? They treat best practices like fashion - they adopt whatever seems to be popular, stick to it religiously, and decry anything that doesn't conform as wrong and bad.
The first problem with this is that quality is not universal - what is quality for one project is an unnecessary cost for another. A set of best practice guidelines designed to promote reusability might boost quality for a big project with a long life-cycle, but be crazy over-engineering for a script that will only be run once. Following best practice guidelines that aim to achieve something that won't ever affect your business is not quality - it is doing it wrong.
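To make that concrete, here is a deliberately exaggerated sketch (hypothetical code, not from any real project) of the same one-off task written two ways - once as a throwaway function, and once through a "reusable" strategy abstraction the script will never need:

```python
from abc import ABC, abstractmethod

# The throwaway version: for a script run once, this is the
# appropriate level of quality.
def total_sales(rows):
    return sum(price * qty for price, qty in rows)

# The "best practice" version: an abstract strategy interface and an
# engine class, built for a reusability requirement that will never
# affect this particular project.
class AggregationStrategy(ABC):
    @abstractmethod
    def aggregate(self, rows):
        ...

class SalesTotalStrategy(AggregationStrategy):
    def aggregate(self, rows):
        return sum(price * qty for price, qty in rows)

class ReportEngine:
    def __init__(self, strategy: AggregationStrategy):
        self._strategy = strategy

    def run(self, rows):
        return self._strategy.aggregate(rows)

# Both produce exactly the same answer...
rows = [(10.0, 3), (2.5, 4)]
assert total_sales(rows) == ReportEngine(SalesTotalStrategy()).run(rows)
```

On a long-lived project with several report types, the second shape might earn its keep; for a run-once script, it is pure cost with no corresponding benefit.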
The other problem is that 'best practices' are sometimes just popular practices or fads, and they might not actually help you to meet your business requirements.
So what is the solution? Appropriate planning (even if the plan lives in your head). Sit down and think about what your 'business' requirements are - what is the big picture you are trying to achieve? Now think about what quality criteria actually matter.
When you do this, think about how to optimise the benefits the project will bring, control the risks, reduce the costs, shorten the timeline, and so on. Try to think long term if this is relevant to your business requirements.
Your quality criteria are themselves an output, so think about the quality of your quality criteria. Good criteria for assessing your quality criteria are:
- Is there a justification for this quality criterion in terms of my business requirements?
- Is there sound evidence, or at least good quality predictions on solid theory, that it will achieve what I want it to?
- Is this quality criterion achievable for my project?
- Do the benefits / opportunities of meeting this quality criterion outweigh the costs / threats, when assessed against the business requirements?
- Is it specific enough to determine when it has been met?
1. 'Business' is in quotes because it doesn't just mean making money; it means achieving whatever it was that you set out to do (for example, for a not-for-profit, business might mean raising awareness of an issue; for a hobby Open Source developer, it might mean providing a Free alternative to a proprietary system or raising name recognition).