Why does Google Search Console report that Googlebot can't crawl my client-side rendered site because it's blocked by robots.txt?
Client-side rendered SPAs request data from the Bloomreach JSON API and render it in the browser. These API URLs should not be indexed by search engines, since they could surface in search results instead of the consumer-facing website, so Bloomreach blocks crawlers from accessing them via a robots.txt file. When Googlebot renders such a page, it cannot fetch the blocked API responses, so Search Console reports the page as blocked by robots.txt.
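As an illustration, a robots.txt rule of this kind might look like the following (the `/api/` path is a hypothetical example, not Bloomreach's actual robots.txt):

```text
User-agent: *
Disallow: /api/
```

Any crawler honoring this file will skip every URL under `/api/`, which is exactly why a browser-rendered SPA that depends on those URLs appears empty or blocked to Googlebot.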
To avoid this issue, Bloomreach recommends using a server-side rendered (SSR) or statically generated (SSG) SPA, or a pre-rendering service such as Prerender.io.
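The pre-rendering approach works by detecting crawler user-agents and forwarding their requests to a service that executes the page's JavaScript and returns fully rendered HTML. A minimal sketch of that detection and forwarding logic is shown below; the bot list and service URL are illustrative assumptions, not Prerender.io's actual API:

```javascript
// Hypothetical sketch of the pre-rendering pattern: serve crawlers
// rendered HTML instead of the empty SPA shell they would otherwise see.

// Illustrative subset of crawler user-agent tokens.
const BOT_PATTERN = /googlebot|bingbot|yandex|duckduckbot|baiduspider/i;

// Returns true when the request appears to come from a known crawler.
function isCrawler(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// Builds the URL a middleware would proxy crawler requests to, so the
// pre-rendering service can fetch and render the requested page.
function prerenderTarget(serviceBase, requestedUrl) {
  return `${serviceBase}/${encodeURIComponent(requestedUrl)}`;
}
```

In a real deployment this logic would sit in an edge or server middleware: human visitors get the normal SPA, while crawlers are proxied to the pre-rendered HTML, so the API URLs never need to be crawlable at all.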