I think I can do some quantitative thought-exercise analysis here. Suppose we have systems A and B, each consisting of a black hole with stars orbiting close in, where the black hole is gravitationally dominant. Let B be 10 times as far away as A, but with the same observed mean orbital velocity at a given angular separation from the central body. Since the same angular separation corresponds to 10 times the linear orbital radius for B, the centripetal acceleration there is 1/10 as great. This is consistent with a central mass 10 times that of A. The event horizon diameters will be in the same proportion as the masses, so their angular diameters will be equal. So far, the two systems look alike.
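Here is a minimal sketch of that scaling argument, assuming Newtonian circular orbits (a = v²/r = GM/r²) and the Schwarzschild radius R_s = 2GM/c². The distance, angular separation, and velocity values are hypothetical, chosen only to illustrate the ratios:

```python
# Scaling check: same observed velocity at the same angular separation,
# one system 10x farther away. All specific numbers are illustrative.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

d_A = 1.0e20         # distance to system A, m (hypothetical)
d_B = 10 * d_A       # B is 10 times as far away
theta = 1.0e-6       # angular separation of an orbiting star, radians (hypothetical)
v = 1.0e6            # observed orbital velocity, m/s, same in both systems

for name, d in (("A", d_A), ("B", d_B)):
    r = theta * d                  # linear orbital radius
    a = v**2 / r                   # centripetal acceleration (1/10 as great for B)
    M = a * r**2 / G               # central mass from a = G*M/r^2
    R_s = 2 * G * M / c**2         # Schwarzschild radius, proportional to M
    print(name, M, R_s / d)        # M is 10x for B; angular size R_s/d is equal
```

Running this shows B's inferred mass coming out 10 times A's while the angular size of the horizon, R_s/d, is identical for both.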

If by velocity profile you mean the orbital velocity as a function of orbital radius, it will vary inversely as the square root of the radius (the usual Keplerian profile) in both systems. They still look alike, so with just the data given so far we have no means of determining the distance.
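A quick check that the profiles coincide when plotted against angular separation, assuming the Keplerian relation v = sqrt(GM/r); the mass and distance values are hypothetical, with B again 10 times as massive and 10 times as distant:

```python
# Keplerian velocity profiles vs. angular separation for two systems
# whose mass and distance both scale by 10. Numbers are illustrative.
import math

G = 6.674e-11
M_A, d_A = 8.0e36, 1.0e20          # hypothetical mass and distance for A
M_B, d_B = 10 * M_A, 10 * d_A      # B: 10x the mass at 10x the distance

for theta in (1e-7, 1e-6, 1e-5):   # sample angular separations, radians
    v_A = math.sqrt(G * M_A / (theta * d_A))
    v_B = math.sqrt(G * M_B / (theta * d_B))
    # the factors of 10 cancel inside the square root, so v_A == v_B
    # at every angular separation: the profile cannot reveal the distance
    print(theta, v_A, v_B)
```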

Now let us consider the tidal stress on a star at a given angular separation from the center. That varies in proportion to the central mass and inversely as the cube of the linear separation. At a fixed angular separation, the mass scales with distance and the linear separation also scales with distance, so the tidal stress scales as the inverse square of the distance: A would be 100 times rougher than B on a star at a given angular separation. The challenge would be to use the telescope of our dreams to observe stars close to the centers and look for signs of disruption. If we can do that, we can also get more direct distance indicators from statistical analysis of the apparent magnitudes of the stars, as we have been doing for many decades.
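The tidal comparison can be sketched the same way, taking the differential acceleration across a star of radius R_star as roughly 2·G·M·R_star/r³; the masses, distances, and stellar radius are again hypothetical:

```python
# Tidal acceleration across a star at the same angular separation
# in two systems with mass and distance both scaled by 10.
# All specific values are illustrative.
G = 6.674e-11
R_star = 7.0e8                     # roughly a solar radius, m (illustrative)
M_A, d_A = 8.0e36, 1.0e20          # hypothetical mass and distance for A
M_B, d_B = 10 * M_A, 10 * d_A
theta = 1.0e-6                     # fixed angular separation, radians

def tidal(M, d):
    r = theta * d                  # linear separation at that angular separation
    return 2 * G * M * R_star / r**3

# ratio = (M_A/M_B) * (r_B/r_A)^3 = (1/10) * 1000 = 100:
# A's tides are ~100 times stronger at the same angular separation
print(tidal(M_A, d_A) / tidal(M_B, d_B))
```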

If I am missing something in your line of thought, please let me know.