My version of loading multiple files into the context lets the ChatGPT UI load the files on its own, whenever it decides it needs to read a file's contents. From some experimentation, it seems that each file it loads is treated as a separate interaction, so the token limit is much less of a problem, and larger pieces of code can be loaded, either the whole thing or piece by piece. https://github.com/kesor/chatgpt-code-plugin
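
For a rough idea of the general shape (the repo above is the real implementation, and its routes and internals may differ), a minimal sketch is a tiny HTTP service with two endpoints, one to list files and one to return a single file's contents, which the plugin manifest points ChatGPT at so it can fetch files only when it wants them. All names and paths below are illustrative assumptions, not the plugin's actual API.

```python
# Hypothetical sketch: serve a local code directory to ChatGPT on demand.
from pathlib import Path

from fastapi import FastAPI, HTTPException

app = FastAPI()
CODE_ROOT = Path("./my-project")  # assumed: the directory you want ChatGPT to browse


@app.get("/files")
def list_files():
    """List relative paths of all files under CODE_ROOT, so the model can pick one."""
    return [str(p.relative_to(CODE_ROOT)) for p in CODE_ROOT.rglob("*") if p.is_file()]


@app.get("/files/{file_path:path}")
def get_file(file_path: str):
    """Return one file's contents; fetched only when ChatGPT decides it needs it."""
    target = (CODE_ROOT / file_path).resolve()
    if CODE_ROOT.resolve() not in target.parents or not target.is_file():
        raise HTTPException(status_code=404, detail="file not found")
    return {"path": file_path, "content": target.read_text()}
```

Run it with something like `uvicorn server:app`, and because each endpoint call is a separate request, the model pulls in one file's worth of tokens at a time instead of the whole codebase at once.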