You want to transform a.(b+c) into a.b+a.c, so you write precisely this
s1/.a_.(b_+c_)->a.b+a.c
and that seems to change nothing, for no clear reason.
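To make this concrete, here is a minimal sketch, assuming s1 is the expression whose FullForm gets inspected later in this post:

s1 = mGamma.(G.w[k] + H.et[k]);
s1 /. a_.(b_ + c_) -> a.b + a.c  (* mGamma has still not been distributed over the sum *)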
This is VERY common when trying to write transformation rules, for a variety of reasons.
So you try writing precisely this
s1/.a_.(b_+c_):>a.b+a.c
and that seems to change nothing, again for no clear reason.
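One thing worth checking along the way: -> evaluates its right-hand side immediately while :> waits until after a match, but the right-hand side a.b+a.c simply evaluates to itself, so swapping one arrow for the other cannot change the outcome here:

FullForm[a.b + a.c]  (* Plus[Dot[a, b], Dot[a, c]] -- already inert either way *)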
So you use the standard trick of looking at the FullForm of your expression to see its real structure
FullForm[mGamma.(G.w[k] + H.et[k])]
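which returns

Dot[mGamma, Plus[Dot[G, w[k]], Dot[H, et[k]]]]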
and that explicit Dot and Plus structure guides you to write precisely this
s1/.Dot[a_,Plus[b_,c_]]->a.b+a.c
or this
s1/.Dot[a_,Plus[b_,c_]]:>a.b+a.c
either of which does seem to work.
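As a quick check on the same sketch expression s1 as above:

s1 /. Dot[a_, Plus[b_, c_]] :> a.b + a.c  (* mGamma.G.w[k] + mGamma.H.et[k], because Dot is Flat and flattens mGamma.(G.w[k]) *)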
You might later accidentally discover, as I did, that this almost imperceptibly different input
s1/.a_ .(b_+c_)->a.b+a.c
actually does work. I leave how and why that differs and why it works as a small puzzle for you. Note: This is not because I have used any oddball characters from some palette or chosen any of the thousands of almost identical looking characters, but Mathematica sees as different, or done any invisible desktop publishing to this. Each of those are simple, plain, ordinary characters exactly as used above.
Unfortunately the forum posting software almost ruins the puzzle, because the typesetting here makes the difference much more obvious than it is when written in a notebook. Note: That input is intentionally not put inside a code box, because a code box would make the difference even more obvious than it is here.
The process of trying to get pattern substitution to work is often much like this.