But that doesn't change anything and has nothing to do with the problem that occurs here.
As described in this thread, projects using either screen-switching method crash, so that's not the problem here.
However, for me on Android all projects work fine and there is no "blank screen" when switching them. So either there is something wrong with the Companion for iOS, or there is something in this project that the iOS Companion is allergic to.
So explain to us why the project works fine for me in the Companion and not for him...
Why your test project works fine and changes screens for him. Why my test project "start value" also works fine for him and opens Screen2, while his project does not open Screen2. In my opinion, switching screens was never a problem in the Companion; it always worked.
There is a known bug we're trying to track down involving screen switching with values on iOS. I was able to replicate the problem but even with additional debugging turned on I have yet to determine why this project fails to switch screens correctly. The code comes over from App Inventor just fine but it may be failing in the initialization step, which would actually create the UI and then run Screen2's Initialize event.
So, as per my link, App Inventor has a screen-switching problem on iOS: you cannot switch screens with a start value.
All is not lost though:
You can save the value to TinyDB in Screen1, then read that value in Screen2
OR
You can use Virtual Screens instead, as most of us do:
When we define virtual screens, we use one 'real' App Inventor Screen (most often Screen1). Screen-sized Vertical Arrangements on it are displayed/hidden as required - they are the Virtual Screens. This is generally a better approach for multi-screen Apps: they share data without having to "pass" it between screens, and it also reduces code duplication, making the App more efficient and the code easier to follow if you have to return to it at a later date. So, instead of separate "houses", virtual screens are "rooms" of the same "house".
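For anyone curious how the same idea looks outside the blocks editor, here is a minimal Swift sketch of it: one real screen owns several full-size containers, and "switching screens" just toggles their visibility. The class and view names are made up for illustration; this is not App Inventor source code.

```swift
import UIKit

// A minimal sketch of the "virtual screens" idea: one real screen, several
// full-size containers, and "switching screens" simply shows one container
// and hides the rest. All names here are illustrative.
class VirtualScreensController: UIViewController {
    let homeRoom = UIView()     // "Screen 1" (a room, not a separate house)
    let resultRoom = UIView()   // "Screen 2"

    override func viewDidLoad() {
        super.viewDidLoad()
        for room in [homeRoom, resultRoom] {
            room.frame = view.bounds
            room.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            view.addSubview(room)
        }
        show(room: homeRoom)    // start in the "home" room
    }

    // Because every room lives in the same controller, state is shared
    // directly; nothing has to be passed via start values or TinyDB.
    func show(room: UIView) {
        [homeRoom, resultRoom].forEach { $0.isHidden = ($0 !== room) }
    }
}
```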
In general, switching screens with the start value works, as my test aia indicates: the start value was correctly transferred to Screen2. There must be something more going on in his project.
However, there is no reason why the screens should not be switched programmatically with the Companion, and there was no problem or error even after switching the screens several times.
In short: there is no reason to switch screens manually with the Companion.
There were multiple race conditions* involved in this bug.
The first involves the screen transition in the companion app and the corresponding transition in the editor. Unlike in the Android version, we actually leverage the built-in functionality of iOS to represent the view stack. This is particularly important because Android (at least up until Android 13) always provided a back button, either as a soft key or, on some phones, as a physical button, so without making use of the UINavigationController the user would have no way of naturally transitioning back in the screen hierarchy like they do on Android. The second part of this is that iOS has a sequence of events that occur during the transition to the new screen, and we chose to make the new screen officially active in the viewDidAppear: callback, which for the most part has been fine.

Independently, a message is returned to App Inventor to let it know that it should switch to the newly opened screen, which it does, and then it sends the code to actually draw the screen back to the companion app. In the older legacy mode, there was enough of a delay here that the UI transition on the device had completed before the code to draw the screen was received, so all was good. However, with the transition to WebRTC it can sometimes be the case that the code to draw the screen arrives before the new screen becomes active, so the previous screen gets drawn over and the new screen (which is blank by default) appears on top of it. This is one potential scenario leading to the blank screen phenomenon.
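To make that ordering problem concrete, here is a rough Swift sketch of the flow just described. The type and method names (CompanionNavigator, renderIncomingScreenCode, and so on) are invented for illustration; this is not the actual companion source.

```swift
import UIKit

// Illustrative only: a navigator that pushes a new screen, plus a separate
// callback that renders screen-drawing code arriving over the network.
class CompanionNavigator {
    var activeScreen: UIViewController?      // incoming screen code is drawn onto this
    let nav: UINavigationController
    init(nav: UINavigationController) { self.nav = nav }

    func openScreen(_ screen: ScreenViewController) {
        screen.navigator = self
        nav.pushViewController(screen, animated: true)
        // Note: activeScreen is NOT updated here; that only happens later in
        // viewDidAppear(_:), after the push animation finishes.
    }

    // Called whenever screen-drawing code arrives (legacy proxy or WebRTC).
    func renderIncomingScreenCode(_ code: String) {
        // RACE: over WebRTC this can run before the new screen's
        // viewDidAppear(_:) fires, so the components get built on the
        // previous screen and the new screen stays blank.
        print("drawing \(code) on \(activeScreen?.title ?? "nil")")
    }
}

class ScreenViewController: UIViewController {
    weak var navigator: CompanionNavigator?
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        navigator?.activeScreen = self       // only now is the new screen "official"
    }
}
```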
The second race condition is conceptually tied to this but slightly different, and is the one we're seeing in this thread. When an event is raised in App Inventor to run the user's blocks, a check is performed called canDispatchEvent. If the event can be dispatched, then the Scheme code for that event block is run. However, in the iOS version we made a premature optimization where if the event is run we also make the Screen active, which turns out to be wrong when dealing with events during screen transitions. In particular, when using this app in its normal flow, it's highly likely that one of the two textboxes has focus when the button to compute the BMI is clicked. Clicking the button starts the transition to Screen2 (as described above) but then the textbox fires its LostFocus event, which swaps out the newly created Screen2 object for Screen1, at which point the code now arrives from App Inventor to draw the contents of Screen2 but they end up on Screen1 and Screen2 appears blank. However, if you click the button without having focused the textbox everything works as expected.
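Continuing the illustrative sketch above (and reusing its made-up types), the premature optimization looked roughly like this: dispatching an event for a screen also made that screen active, which is exactly what goes wrong when Screen1's TextBox fires LostFocus while Screen2 is being opened. Again, this only approximates the real code.

```swift
// Reuses the illustrative CompanionNavigator/ScreenViewController from the
// previous sketch; not the actual companion implementation.
func canDispatchEvent(on screen: ScreenViewController,
                      named event: String,
                      using navigator: CompanionNavigator) -> Bool {
    guard screen.navigationController != nil else { return false }

    // BUG (pre-2.64.2 build 7): dispatching any event made its screen the
    // active one. When Screen1's TextBox.LostFocus fires during the push of
    // Screen2, this line swaps activeScreen back to Screen1, so Screen2's
    // drawing code lands on Screen1 and Screen2 appears blank.
    navigator.activeScreen = screen
    return true
}
```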
Both of these race conditions should be fixed as of 2.64.2 build 7 on TestFlight.
* A race condition here is a term of art to describe two or more processes running in parallel where we expect the same outcome regardless of which process finishes first, but the actual outcome ends up depending on the order the processes finish (such as in a road race).
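As a toy illustration of the term (unrelated to the companion internals), the following Swift snippet starts two tasks that both write to the same variable; which value you end up with depends entirely on which task happens to finish last.

```swift
import Foundation

// Deliberately unsynchronized to show the race: the final value of `winner`
// depends on scheduling, not on anything written in the code itself.
var winner = "nobody"
let group = DispatchGroup()

for name in ["taskA", "taskB"] {
    group.enter()
    DispatchQueue.global().async {
        Thread.sleep(forTimeInterval: Double.random(in: 0...0.1))  // variable run time
        winner = name            // whoever finishes last "wins"
        group.leave()
    }
}

group.wait()
print("winner: \(winner)")       // sometimes taskA, sometimes taskB
```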
I also saw this error when testing with the version in the App Store, but I have been unable to replicate it with the version in TestFlight. Which version were you running when you observed that error?
I can understand why this would be useful. In particular, if there is a lengthy sequence of steps to get deep enough into the app to perform testing, it can be helpful to have ways to shortcut all the intermediate state. Old video games, for example, used to have cheat codes or other ways to jump to different levels for testing.
For example, if I'm fixing a bug in App Inventor, every time I need to rebuild the companion, load it onto my device, then scan the QR code to connect up to the project that triggers the bug. Sometimes it would be nice to just compile and then run the test project but we don't have an apparatus for that yet (and then of course you'd still have to simulate whatever inputs are required to trigger the bug, if any).
Sorry, but I don't understand what the difference is supposed to be between switching to a screen manually and switching, for example, via a button within the app.
If certain procedures (events) should not be executed on the newly opened screen (for testing purposes), I will have to disable them beforehand anyway. To do this, I would first disconnect the Companion, make the adjustments on that screen, and only then reconnect the Companion.