Variance-mean relationship
One difference between using a gamma family and a normal family is the assumed relationship between the mean and the variance.
For a normal family, the variance of the outcome variable is considered constant: it does not change when the mean changes.
This occurs in many situations. It arises, for instance, when the outcome is subject to additive errors whose size does not depend on the mean of the distribution.
For a gamma family, the variance is considered to scale with the square of the mean. E.g. for conditions where the mean is $q$ times larger, the variance will be $q^2$ times larger.
This is as if the mean acts as a scale parameter: when you scale a distribution by some factor, the mean grows linearly with that factor and the variance grows with the square of that factor.
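As a small illustration of that scaling (a stdlib-only sketch; the shape parameter $k$, the factor $q$, and the sample size are arbitrary choices), simulate gamma samples whose mean is enlarged by a factor $q$ and check that the variance grows by roughly $q^2$:

```python
import random
import statistics

random.seed(0)
k = 4.0        # gamma shape parameter (held fixed)
theta = 1.0    # gamma scale parameter; mean = k * theta, variance = k * theta**2
q = 3.0        # factor by which we enlarge the mean
n = 200_000

# Baseline sample, and a sample whose mean is q times larger
base = [random.gammavariate(k, theta) for _ in range(n)]
scaled = [random.gammavariate(k, q * theta) for _ in range(n)]

mean_ratio = statistics.fmean(scaled) / statistics.fmean(base)
var_ratio = statistics.pvariance(scaled) / statistics.pvariance(base)
print(f"mean ratio:     {mean_ratio:.2f}")   # roughly q, i.e. 3
print(f"variance ratio: {var_ratio:.2f}")    # roughly q**2, i.e. 9
```

The same simulation with normally distributed errors of fixed spread would give a variance ratio near 1 regardless of the mean ratio.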
Link function
In addition, the gamma family will use an inverse link function by default. That changes the relationship between your parameters and the mean:
$$E[Z \mid x] = \frac{1}{a+bx}$$
instead of
$$E[Z \mid x] = a+bx$$
The link function is not essential to a family and can be changed; the default is the canonical link function.
Often the canonical link function is also a practical transformation, i.e. the relationship between a parameter and the mean naturally has the form of the link function. Logistic regression is an example: besides being the canonical link for the binomial family, the logit arises in many different settings as a practical functional form or from some underlying principle (for the inverse link function, I believe there might also be something practical or logical behind it, but it is not on the tip of my tongue).
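To make the difference concrete, here is a minimal sketch (the coefficients $a$ and $b$ and the $x$ values are made up for illustration) of how the same linear predictor $a + bx$ maps to the conditional mean under the inverse link versus the identity link:

```python
def mean_inverse_link(a: float, b: float, x: float) -> float:
    """Conditional mean under the inverse (canonical gamma) link: E[Z | x] = 1 / (a + b*x)."""
    return 1.0 / (a + b * x)

def mean_identity_link(a: float, b: float, x: float) -> float:
    """Conditional mean under the identity (default normal) link: E[Z | x] = a + b*x."""
    return a + b * x

a, b = 0.5, 0.25  # illustrative coefficients, not fitted values
for x in (0.0, 1.0, 2.0):
    print(x, mean_inverse_link(a, b, x), mean_identity_link(a, b, x))
```

Note that under the inverse link a larger linear predictor means a *smaller* conditional mean. In software such as R's `glm` or Python's statsmodels the link can be swapped (a log link is a common alternative for gamma models) without changing the family.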
Which is best
Choose the model that is suitable: the model that you believe describes the data best.
The GLMM might have one disadvantage, which is that it is more computationally heavy. The random effects (Gaussian distributed) need to be combined with the errors (gamma distributed), and this does not have a nice analytical solution as it does in an LMM.
The effect of using a wrong model is not always harmful. The advantage of a better model is more accuracy (often the fit will be more efficient and closer to reality), but a less accurate model can still be useful.
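As a small illustration of that last point (a stdlib-only sketch; the true intercept 2, slope 3, gamma shape, and sample size are all arbitrary): even when the outcomes are gamma distributed, an ordinary least-squares fit, which implicitly assumes a normal family, can still recover the mean relationship quite accurately, just less efficiently than the matching gamma model would.

```python
import random
from statistics import fmean

random.seed(1)
n = 50_000
k = 10.0   # gamma shape; conditional variance = mean**2 / k (variance grows with the mean)

xs = [random.random() for _ in range(n)]
# Gamma outcomes with conditional mean mu(x) = 2 + 3*x (identity link, for simplicity)
ys = [random.gammavariate(k, (2.0 + 3.0 * x) / k) for x in xs]

# Closed-form OLS slope and intercept: the "wrong" normal-family model
mx, my = fmean(xs), fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"OLS estimates: intercept {intercept:.2f}, slope {slope:.2f}")  # close to 2 and 3
```

The point estimates are fine because OLS only needs the conditional mean to be linear; what the misspecification damages is the standard errors and hypothesis tests, which is the concern raised next.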
Also, be aware that if the model has a low goodness of fit (if the assumptions about the distribution are bad), then inference with hypothesis testing will be incorrect. A hypothesis about some effect implicitly includes the assumptions of the statistical model. So if we reject a hypothesis, we are really rejecting the stated hypothesis about the effect *together with* the assumptions. In practice, if the assumptions are reasonable and trusted to be accurate, they are left out of the conclusions.