WARNING, LONG ANSWER AHEAD. SUMMARY:

- The underlying issue is known but might not see much traction
- An easy fix is to use a type assertion: `return { ...arg, x: "" } as T;`
- The easy fix isn't completely safe and has bad outcomes in some edge cases
- In any case, g() doesn't infer T properly
- The refactored g() function at the bottom might be better for you
- I need to stop writing so much
The main issue here is that the compiler is simply not clever enough to verify some equivalences for generic types.
```ts
// If you use CompilerKnowsTheseAreTheSame<T, U> and it compiles,
// then T and U are known to be mutually assignable by the compiler.
// If you use CompilerKnowsTheseAreTheSame<T, U> and it gives an error,
// then T and U are NOT KNOWN to be mutually assignable by the compiler,
// even though they might be known to be so by a clever human being.
type CompilerKnowsTheseAreTheSame<T extends U, U extends V, V = T> = T;

// The compiler knows that Picking all keys of T gives you T
type PickEverything<T> =
  CompilerKnowsTheseAreTheSame<T, Pick<T, keyof T>>; // okay

// The compiler *doesn't* know that Omitting no keys of T gives you T
type OmitNothing<T> =
  CompilerKnowsTheseAreTheSame<T, Omit<T, never>>; // nope!

// And the compiler *definitely* doesn't know that you can
// join the results of Pick and Omit on the same keys to get T
type PickAndOmit<T, K extends keyof T> =
  CompilerKnowsTheseAreTheSame<T, Pick<T, K> & Omit<T, K>>; // nope!
```
Why isn't it clever enough? In general there are two broad classes of answer for this:
- The type analysis in question relies on some human cleverness which is difficult or impossible to capture in compiler code. Until the Singularity happens and the TypeScript compiler becomes fully sapient, there will be some things that you can reason about that the compiler just can't.
- The type analysis in question is relatively straightforward for the compiler to perform, but doing so takes time and will probably have a negative impact on performance. Does it improve the developer experience enough to be worth the cost? The answer is often, unfortunately, no.
In this case it's probably the latter. There is a GitHub issue about it, but I wouldn't expect to see much work on it unless lots of people start clamoring for it.
Now, for any concrete type, the compiler will generally be able to go through and evaluate the concrete types involved and verify the equivalences:
```ts
interface Concrete {
  a: string,
  b: number,
  c: boolean
}

// okay now
type OmitNothingConcrete =
  CompilerKnowsTheseAreTheSame<Concrete, Omit<Concrete, never>>;

// nope, still too generic
type PickAndOmitConcrete<K extends keyof Concrete> =
  CompilerKnowsTheseAreTheSame<Concrete, Pick<Concrete, K> & Omit<Concrete, K>>;

// okay now
type PickAndOmitConcreteKeys =
  CompilerKnowsTheseAreTheSame<Concrete, Pick<Concrete, "a"> & Omit<Concrete, "a">>;
```
But in your case you are trying to get it to happen with generic T, which is not going to happen automatically.
When you know more about the types involved than the compiler does, chances are you'll need a judicious type assertion, which is part of the language for just such cases:
```ts
// assuming the A from your question is something like:
type A = { x: string };

function g<T extends A>(arg: Omit<T, "x">): T {
  return { ...arg, x: "" } as T; // no error now
}
```
There, it compiles now, and you're done, right?
Well, let's not be too hasty. The pitfall of a type assertion is that you are telling the compiler not to bother verifying something because you know for sure that what you are doing is safe. But do you know that? It depends on whether you expect to hit some edge cases. Here's the one that worries me the most about your example code.
Let's say I have a discriminated union type U, which is meant to hold either an a property or a b property, depending on the string literal value of the x property:
```ts
// discriminated union U
type U = { x: "a", a: number } | { x: "b", b: string };

declare const u: U;
// check discriminant
if (u.x === "a") {
  console.log(u.a); // okay
} else {
  console.log(u.b); // okay
}
```
No problem, right? But wait, U extends A, because any value of type U should also be a value of type A. That means I can call g like this:
```ts
// notice that the following compiles with no error
const oops = g<U>({ a: 1 });
// oops is supposed to be a U, but it's not!
oops.x; // is "a" | "b" at compile time but "" at runtime!
```
The value `{a: 1}` is assignable to `Omit<U, "x">`, and therefore the compiler thinks it has produced a value `oops` of type `U`. But it hasn't, has it? You know that `oops.x` will be neither `"a"` nor `"b"` at runtime, but rather `""`. We've lied to the compiler, and now we will get into trouble later when we start using `oops`.
Now maybe such an edge case is not going to happen to you, and if so, you shouldn't worry about it much... after all, the typing is supposed to make maintaining the code easier, not harder.
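To make that danger concrete, here's a hypothetical, self-contained sketch (assuming `A` is `{ x: string }`) of how the lie to the compiler bites at runtime:

```ts
type A = { x: string };
type U = { x: "a", a: number } | { x: "b", b: string };

function g<T extends A>(arg: Omit<T, "x">): T {
  return { ...arg, x: "" } as T; // the assertion hides the problem
}

// handle every discriminant value... or so the compiler thinks
function describe(u: U): string {
  switch (u.x) {
    case "a": return "a is " + u.a;
    case "b": return "b is " + u.b;
  }
  // no return needed here; the compiler believes this point is unreachable
}

const oops = g<U>({ a: 1 }); // compiles with no error
console.log(describe(oops)); // prints "undefined": the "" discriminant matched no case
```

The compiler considers the switch exhaustive, so it never warns about the missing return; the bug only surfaces when the "impossible" `""` discriminant arrives.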
Finally I want to mention that the g() function as typed will never be able to infer a type for T that is any narrower than A. If you call g({a: 1}), T will be inferred as A. If T is always inferred as A then you might as well not even have a generic function.
For possibly the same reason that the compiler can't peer into `Omit<T, "x">` enough to understand how it can join with `Pick<T, "x">` to form `T`, it cannot peer into a value of type `Omit<T, "x">` and figure out what `T` is supposed to be. So what can be done?
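As a hypothetical sketch of that inference failure (assuming `A` is `{ x: string }`, with the assertion-based g() from above):

```ts
type A = { x: string };

function g<T extends A>(arg: Omit<T, "x">): T {
  return { ...arg, x: "" } as T;
}

// T cannot be inferred from Omit<T, "x">, so it falls back to its constraint A
const inferred = g({ a: 1 }); // inferred has type A, not { x: string, a: number }
// inferred.a; // error! Property 'a' does not exist on type 'A'
console.log(inferred.x); // "" at runtime
```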
It's much easier for the compiler to infer the type of an actual value you pass to it, so let's try that:
```ts
function g<T>(arg: T) {
  return { ...arg, x: "" };
}
```
Now g() will take a value of type `T` and return a value of type `T & {x: string}`. This will always end up being assignable to `A`, so you should be fine to use it:
```ts
const okay = g({a: 1, b: "two"}); // {a: number, b: string, x: string}
const works: A = okay; // fine
```
If you want to prevent callers from passing g() a value that already has an x property, note that the version above doesn't do that:

```ts
const stillWorks = g({x: 1}); // no error
```
but we can do it with a constraint on T:

```ts
function g<T extends { x?: never }>(arg: T) {
  return { ...arg, x: "" };
}

const breaksNow = g({x: 1}); // error, number is not assignable to never
```
This is fairly type-safe, doesn't require type assertions, and is nicer for type inference. So that's probably where I'll leave it.
Okay, hope this novella helped you. Good luck!