t1 = t0 + x1/c    (1)
and the signal from transmitter 2 reaches the receiver at time

t2 = t0 + x2/c ,    (2)
where t0 is the time the signal is sent out (assuming both transmitter clocks are synchronized), x1 is the distance of the receiver from transmitter 1, x2 the distance of the receiver from transmitter 2, and c the speed of light. The difference of the two distances then follows directly from the difference of the arrival times:

x1 - x2 = c·(t1 - t2) .    (3)
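To make the arithmetic of Eqs. (1)-(3) concrete, the following short Python sketch runs the comparison with made-up numbers; the transmit time t0 and the two distances x1, x2 are hypothetical values chosen only for illustration, not actual satellite data.

    # Numerical sketch of Eqs. (1)-(3); t0, x1 and x2 are hypothetical values.
    C = 299_792_458.0     # speed of light in m/s

    t0 = 0.0              # common transmit time (transmitter clocks synchronized)
    x1 = 21_000_000.0     # assumed distance receiver - transmitter 1, in m
    x2 = 23_500_000.0     # assumed distance receiver - transmitter 2, in m

    t1 = t0 + x1 / C      # Eq. (1): arrival time of signal 1
    t2 = t0 + x2 / C      # Eq. (2): arrival time of signal 2

    # Eq. (3): the distance difference follows from the arrival times alone;
    # the receiver clock never enters the calculation.
    print(C * (t1 - t2))  # prints -2500000.0, i.e. x1 - x2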
One therefore knows the position of the receiver just by comparing the time signals from the two transmitters (the receiver clock is completely irrelevant). Assume now that both transmitter clocks run fast by the same relative factor (1+ε); then

x1 - x2 = c·[(1+ε)·t1 - (1+ε)·t2] = c·(1+ε)·(t1 - t2) ,    (4)
which means that the position will simply be wrong by a relative factor ε, but there is obviously no accumulation, as the transmitter clocks run at the same rate relative to each other (assuming that all satellites have identical heights and speeds, i.e. identical relativistic time dilations). Now the quoted relativistic correction of 38 microseconds/day corresponds to ε = 4.4·10⁻¹⁰. As the satellites are at a distance of around 20000 km (= 2·10⁹ cm), the positional error due to relativity should actually only be 4.4·10⁻¹⁰ · 2·10⁹ cm ≈ 0.9 cm! This is far less than the presently claimed GPS accuracy of a few meters, so the relativistic effect should actually not be relevant at all.
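The sub-centimeter figure can be checked with the same kind of sketch: assuming both transmitter clocks run fast by the factor (1+ε) of Eq. (4), the inferred distance difference is scaled by (1+ε), so the positional error is ε times the distance itself. The 38 microseconds/day and the 20000 km baseline are simply the figures quoted above.

    # Sketch of the scaling argument of Eq. (4), using the figures quoted above.
    eps = 38e-6 / 86_400.0      # 38 microseconds per day as a relative rate error
    print(eps)                  # ~4.4e-10

    baseline_cm = 2e9           # ~20000 km expressed in cm
    # Per Eq. (4) the inferred distance difference is (1+eps) times the true one,
    # so the positional error is eps times the distance itself.
    print(eps * baseline_cm)    # ~0.88 cm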