Yes to both, but the relationship isn’t simple.
Some languages can be compiled quickly, and others are relatively slow. Without looking closely at the language and at how the compiler works, there isn't a simple way to predict whether a language will be slow to compile, but there are two major considerations.
Structure - The structure of the language itself impacts compile time. Does it need to include many files (like header files)? Does it have to resolve a lot of typing (like C++ templates)? Does it have a complex syntax that requires multiple parsing passes? Can processing be done in parallel, or must it be serial?
How much the resource usage differs between programming languages depends a lot on what program you're writing. So let's break this into two considerations.
Overhead - The way a language works with memory, objects, functions, and concurrency can introduce overhead: costs you'll incur unless you try hard to avoid them. The more abstract the language, the more overhead you tend to have, and there tends to be more overhead in dynamic than in static languages (see compile time). This overhead applies across all the code you write in the language. It may not apply to the libraries you use, as they may be written in other languages or have been optimized. This is why it isn't problematic to use inefficient languages in some domains: the inefficient code accounts for only a small portion of what's actually running.
What’s interesting about this is that none of it directly relates to the paradigms that a language covers. The factors could apply to a language in any domain.
And though specific features can make a language slow, it's usually the history of how the feature evolved, not its inherent complexity, doing that. Complex features can be quite fast, as I'm discovering with my Leaf compiler.