SES vs VS Code vs something else

Hello, 
In our firmware development department, we are migrating from the nRF5 SDK to the nRF Connect SDK, so we are learning Zephyr and everything around it. Previously we used SES Nordic Edition, but now we see that Nordic is actively developing VS Code and its plugins. So the question is: which IDE has more of a future with Nordic? Will you continue to develop SES Nordic Edition? What is Nordic's plan in this regard?

Thank you.

  • SES is head and shoulders better than the VS Code environment even now (late 2023). There has been a lot of cleanup on the integration and it's getting better, BUT there is a very long way to go in papering over the atrocities of Zephyr.

    The DeviceTree replacement still needs vast amounts of work to make it even sort of sane. The basic problem here is that Zephyr's fundamental view of hardware is flat wrong for embedded programming -- they are big iron people attempting to do small iron and failing miserably at it.

  • Hello,

    When I first made the move from SES to VSC roughly two years ago, I unequivocally shared your sentiment about SES being miles better, but as I have become familiar with VSC and Zephyr I am now of the completely opposite opinion - if I may say so, the nRF Connect SDK is now definitively a large step forward from the nRF5 SDK.
    I will admit that it was at first very daunting to familiarize myself with Zephyr and the devicetree, but once you overcome this initial hurdle I think you too will appreciate all that the nRF Connect SDK brings to the table for developers.

    I would also like to emphasize that we, Nordic, are among the primary contributors to the Zephyr project, and I like to think that we are pretty well-versed in 'small iron' things, if by this you mean close-to-the-metal embedded application development. However, we also recognize that as our SoCs get more complex, developers will benefit from not having to take a bare-metal approach, which is why we have taken steps to advance our SDK in this direction.

    Is your primary issue with the nRF Connect SDK related to Zephyr being the foundation of the SDK, or with how source control is implemented in the SDK?

    Best regards,
    Karl

  • This is a two-part problem for me, so let me comment on it in two parts. On the Zephyr side, I haven't been able to look inside the kernel yet, so I can't really say anything about its structure. My beef is with its world view.

    First off, let me posit this: if you aren't deeply concerned about the hardware you are running on, you aren't doing embedded code.

    From that, my opinion is that the entire hardware viewpoint of Zephyr is clearly big iron. The idea that you're going to move a design from board to board runs counter to the posit above. It just doesn't happen without changes to the application structure itself. If you don't deeply care about what you're running on, you aren't doing embedded code.

    So let's take this deeper: I don't give a rat's ass about how many boards Zephyr supports; it doesn't support my board. Therefore, there are no deltas from some other board; there is only my board. Zephyr's idea that everything is deltas from some reference board (or worse, from a set of boards) is an indication of this big iron take.

    Further, since I deeply care about what hardware I'm on, I want to keep ALL my configuration tightly controlled with my source code (basically AS PART OF my source code, even conceptually).  Hardware deeply matters! Code is an extension of hardware.

    Also, if you take a look at good IDEs concerning hardware, device tree is awful. My Gold Standard for IDEs is Cypress's (not Infineon's) PSoC Creator 4.4 -- hands down THE best there is in the embedded space. It shows all the CPU configurations available and lets you set them up correctly right there, along with showing you things that conflict. Peripherals are all set up the same way. You generate all the configuration files (.h and .c) this way and it places them into the make directory for you. Simple. Simple.

    Edits to drivers are easily made from there, the generator respects them, and it's all in the source tree.

    Another is MCUXpresso, along the same lines although not quite as good. Renesas e2 studio is similar, although it goes a little deeper, adding threads and some other upper-layer configuration, and it is actually too cooked in the drivers for my taste (if you do too much in a driver, it is likely the driver isn't going to work in your application exactly). The mark of good code is when it is used in ways that the creator never envisioned.

    Device tree is basically an intermediate configuration file that is missing the upper layers of an IDE. Nordic is starting to get there with the devicetree editor in VS Code but still has a long way to go to get to this level of good.

    I'll wait for it, but the status quo is still terrible.

    From what I can see of the SDK itself, yes, things are getting better, BUT a person should note that sometimes it's a lot easier to just configure things from the init() call.

    And, since you're involved in Zephyr, I'll note that in most applications a person sets things up once and runs that way, BUT I've got applications where I change the configuration of the hardware on the fly depending on the context of what's going on at the time (see the sketch at the end of this post). Again, if you don't deeply care about the hardware you're on, you aren't doing embedded code.

    That's the difference between little iron and big iron. It's not the size of the iron, it's the seamless transition from iron to firmware.  (I particularly like to inhabit microcode myself)...
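
    To be concrete, the sketch below is roughly what I mean by configuring -- and reconfiguring -- from code instead of from the devicetree. It is only an illustration: the gpio0 node label, the pin number and the "mode" scenario are made up, and it leans on the standard Zephyr GPIO calls.

    ```c
    /* Rough sketch only: a pin that is normally an output but is switched to an
     * input with a pull-up in a particular operating mode, decided at runtime.
     * The gpio0 node label and pin number are assumptions for illustration. */
    #include <zephyr/drivers/gpio.h>

    #define SENSE_PIN 11

    static const struct device *gpio_port = DEVICE_DT_GET(DT_NODELABEL(gpio0));

    int pin_as_output(void)
    {
        /* Normal mode: drive the pin ourselves. */
        return gpio_pin_configure(gpio_port, SENSE_PIN, GPIO_OUTPUT_INACTIVE);
    }

    int pin_as_input(void)
    {
        /* Context change: let external hardware drive the pin and just listen. */
        return gpio_pin_configure(gpio_port, SENSE_PIN, GPIO_INPUT | GPIO_PULL_UP);
    }
    ```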

  • Part 2 of this: VSCode and the nature of coding

    My code has to live 15-20 years. That means it will likely outlive any cloud company. I have to be able to pull that code base up at pretty much any time and recreate it exactly, project by project (see also How to set up VScode for eternal build).

    VS Code is basically built in the Zeitgeist of 'change is good'. Nothing lasts more than 6 months; everything should be the latest and greatest, even when it completely breaks what you're doing. It self-updates. It depends on a tangled web of whatever and sorts it all out when you build (that's a good thing if you have a tangled web, but having a tangled web is itself bad).

    Basically, the problem is that I want everything needed to build a project in the project, so that when I commit it, someone else can pull that down and build it exactly.

    The basic philosophy of VS Code is that all your ancillary things should reside in GitHub repos somewhere else and everything is a layer over everything else (recursively). Simple and Microsoft are not words that belong together, and that entire mindset is infecting the world of coding.

    All the other IDEs run simple; the code tree is in front of you (for the most part) and the toolchain is what you're pointing at (and you can have multiple toolchains). I go to the top of the tree and commit it. Everything I need, including library source (where possible -- for the SoftDevice I don't get source, but the object code is there), is in that commit. Simplicity is a virtue. It's not even a concept at MS. Don't fall into that trap; simplify, simplify, simplify.

    And the last thought that I'll leave you with: Things that are designed to be all things to all people usually end up being nothing to anybody. The notion that there are 1500 people working on Zephyr isn't a compliment in my book.

  • Bonus part 3:

    This newer SDK is probably a step forward from the nRF5 SDK in structure, to some extent. I've been able to look a little at the internals of Zephyr, and there are some things it's capable of that are interesting. I've got a piece of equipment that labels bottles that has some issues: the good news is that everything is configurable. The bad news is that everything is configurable.

    It would, in my book, be better if the SDK were divorced from the actual RTOS under it. There are better RTOSes around than Zephyr. Certainly MUCH better configuration systems available. MUCH. If Nordic wants to put in the effort to gloss over the atrocities that Zephyr presents, that's a huge step forward and probably more productive than making it run on any RTOS.

    That's a tall order. There's a lot to gloss over.

    Source control can't be divorced from context control. I'm trying to figure out how to do context control. MS doesn't believe in it, which makes your job that much harder.

  • Hello,

    Thank you for sharing these perspectives, we appreciate the feedback! :) 

    Randy Lee said:
    My beef is with its world view.

    I sense that the benefits of the Zephyr foundation are not something that I can easily sell you on, but I will attempt to at least provide some insight from my own perspective.

    I agree that the added level of abstraction takes us further away from the hardware itself, but I would not say that this is a step in the wrong direction, nor that it takes us away from the 'embedded programming' aspects.
    The extra abstraction level also enables a much higher degree of reusability between similar products, and allows customers to spend less time 'reinventing the wheel' when they start up a new project (there is a small sketch of what I mean towards the end of this reply).

    Regarding the devicetree, I recognize your assessment that it is very cumbersome to familiarize yourself with - especially before the recent improvements with the Devicetree GUI option - but I would like to emphasize that once you are familiar with it the benefits far outweigh the cons. One thing we often see is that customers start out with one project, and later make a second or third version with new functionality, for instance.

    Another example: during the recent silicon shortage this also turned out to be a big strength for customers that were not able to get hold of large enough lots of the specific chips in their developed projects - they could easily swap to different versions of our chips that were available, and stay in production.

    Regarding your assessment of the structure and distribution of the nRF Connect SDK - being 'spread across many different repositories' - I would argue that this too is a large advantage compared to the distribution of the nRF5 SDK.
    While the code is pulled from different repositories, it is all available on your computer, locally, at build time - so you should not have any issue storing the code locally if you prefer to.
    I.e. you can still apply the same process/storage to an nRF Connect SDK based application as you could with your previous nRF5 SDK based application.

    Lastly, while it would be possible to 'divorce the SDK from the RTOS foundation', I am not sure this would bring any practical improvement compared to the alternative.
    Additionally, if we were to make it configurable for a swath of different RTOSes, would that not just add yet another level of abstraction?
    Perhaps I have not fully understood your concern or feedback about this part.
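
    Coming back to the reusability point above, here is a small sketch of what I mean. It is only an illustration - the 'status-led' alias is an example name - but note that the C code does not mention any pin or SoC; it only refers to whatever the board's devicetree maps the alias to, so the same source builds unchanged when the board or chip variant changes.

    ```c
    /* Illustration only: the status-led alias is made up. */
    #include <errno.h>
    #include <zephyr/drivers/gpio.h>

    static const struct gpio_dt_spec status_led =
            GPIO_DT_SPEC_GET(DT_ALIAS(status_led), gpios);

    int status_led_init(void)
    {
        if (!device_is_ready(status_led.port)) {
            return -ENODEV;
        }

        /* Which port, pin and polarity this resolves to is decided entirely by
         * the board's devicetree, not by this code. */
        return gpio_pin_configure_dt(&status_led, GPIO_OUTPUT_INACTIVE);
    }
    ```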

    Thank you for taking the time to write out all this - I really appreciate the perspective! :)

    Best regards,
    Karl

  • Abstraction is useful and a curse. The problem with cooked things is that sometimes it's too cooked or not in the correct genre (I don't want Mexican if I'm looking for Cajun). If you are doing things in the grain of whoever did whatever, and it just works, that's fine. Lord help you if any of those requirements are not true. Example: I2C drivers. If I've got an I2C bus and a bunch of peripherals on it that only a single thread accesses, then I can do this pretty straightforwardly. If I've got two threads accessing that bus (probably for different peripherals), then I need to guard the I2C drivers with semaphores and timeouts and a whole 'nother layer of abstraction (see the sketch at the end of this post). Unless you've got some pretty fancy code parsers, only I know whether I need that level of abstraction or not. If I don't, I don't want it in the code path, as it adds complexity (read: bugs) and time to the driver calls.

    Let me give you another example of something overly abstracted: I have a silicon vendor's set of code that runs several layers of code to implement a touch screen all the way down to the touch controller. I've had to rewrite that deep driver three times now because of parts obsolescence, with the parts getting dumber all the time, so I've had to simulate all kinds of interesting things in the driver. The driver is named for the original part, but I can't change that because the upper layers know directly about it and I can't get at those layers to correct their erroneous viewpoint. So I live with code that's ugly (which bothers me to no end).

    As to changing out hardware under you: I've had to do that during the chip shortage too. The difference here was that I had good tools available to do it. Let's go back to my Gold Standard tool: Cypress PSoC Creator. I couldn't get a chip, so changing to another similar one was 3 clicks: one to select the tool, one to select the part number and a third to build it. Boom. I had to do the same sort of thing with a different package. That took a few more clicks because I had to rearrange the original pins onto the new target pins, one click per pin, and then a build click. Boom.

    Basically, Device Tree is like XML -- it's an intermediate file; no one should be editing it directly. We're only doing that because the real tools haven't been built yet. The Devicetree GUI is a great step in the right direction, but it needs to go far enough to completely obsolete any editing whatsoever of devicetree files. And drop the idea that any given board is a derivative of another one. They aren't, even when cloned from one application to the next. Firmware is an extension of hardware, and all of that needs to be kept together in my single repository on my end. All of it.

    To that point, having all kinds of places to grab bits and pieces of the SDK only makes it much tougher on everyone, including your product support; if there isn't a "version", I have no idea how you'd debug that from a support perspective, as you then have large (if not essentially infinite) numbers of versions around. Simplification through standardization is a very important thing there.

    As to divorcing the SDK from the RTOS, I'll note that the nRF5 SDK is; it doesn't even need an RTOS under it (although it really ought to have one once you're into things like BLE), so this is an architectural question for the SDK. Can it be done? Certainly, Nordic has done so before. Should it be done? Harder question. I just wish you'd picked a better OS to work with, like FreeRTOS or MQX or something. I will say that it's rather a moot point since this train has left the station already, so I'll have to suck it up and deal with it, but I have to rest my hope on the notion that Nordic will continue to significantly improve its good start on actual tools to deal with this. Cypress is my Gold Standard on this. If you can get to that level, things will work out fine even with the Big Iron mindset of Zephyr.

    Of course, once you've obsoleted the underlying stuff, the thing to do is to remove it and get back to simple. Collapse layers. Simple is a blessing in all code, as long as it's not too simple.
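
    For what it's worth, here is roughly what that extra layer looks like with the plain Zephyr I2C API. It's only a sketch -- the sensor address, register and timeout are made up -- but note the lock, the timeout and the error path that now sit in front of every multi-step transaction.

    ```c
    /* Rough sketch of guarding a shared I2C bus; names and addresses are made up. */
    #include <errno.h>
    #include <zephyr/drivers/i2c.h>
    #include <zephyr/kernel.h>

    #define SENSOR_ADDR 0x48   /* hypothetical device on the shared bus */

    K_MUTEX_DEFINE(i2c_bus_lock);

    /* Read-modify-write of a config register: two bus transactions that must not
     * be interleaved with another thread's traffic to the same device. */
    int sensor_update_config(const struct device *i2c_dev, uint8_t reg,
                             uint8_t mask, uint8_t value)
    {
        uint8_t current;
        int err;

        if (k_mutex_lock(&i2c_bus_lock, K_MSEC(100)) != 0) {
            return -EBUSY;          /* timed out waiting for the bus */
        }

        err = i2c_reg_read_byte(i2c_dev, SENSOR_ADDR, reg, &current);
        if (err == 0) {
            current = (current & ~mask) | (value & mask);
            err = i2c_reg_write_byte(i2c_dev, SENSOR_ADDR, reg, current);
        }

        k_mutex_unlock(&i2c_bus_lock);
        return err;
    }
    ```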

  • Hello,

    Randy Lee said:
    The problem with cooked things is that sometimes it's too cooked or not in the correct genre

    I see your point, but I would argue that the added configurability of the nRF Connect SDK works to alleviate this by allowing you to configure it - whichever part - to align more closely with what you want, and I think the addition of semaphores to your multiple threads that access the same resource is a small price to pay for this flexibility.
    Additionally, if you had wanted to avoid this altogether, you could even have set up a single thread to run all your I2C peripheral transactions (there is a rough sketch of this towards the end of this reply).

    Randy Lee said:
    Let me give you another example of overly abstracted:

    I acknowledge that there is definitely a risk of going 'too far' with the abstractions, or of not doing them correctly, as in the examples here, but I do not think the nRF Connect SDK is headed in this direction. As you mentioned in your earlier comment, there are a lot of engineers contributing to the Zephyr project, so there are strict guidelines for what you can do and how it should be done, and this is one of the places where the open-source nature of the Zephyr project really shines.
    In your example it sounds like the vendor has neglected to maintain this part of the code, which is a failure on their part and not something an end user can easily influence. In the Zephyr project, however, you can easily highlight the issue by opening a Git issue to have it looked at by the developers, or a PR with the necessary fix if you have already made one, so that it is resolved at the root instead of you having to carry a patch in your local copy of the SDK.

    Randy Lee said:
    Let's go back to my Gold Standard Tool: Cypress PSoC Creator.

    It definitely sounds like their tool rose to the occasion in this situation, but if I may ask, does it also limit you to their specific product range?
    I.e. can you just as easily change to an SoC from a different vendor within this tool?
    I ask because I am not personally familiar with Cypress PSoC Creator.

    Randy Lee said:
    To that point, having all kinds of places to grab bits and pieces of the SDK only makes it much tougher on everyone, including your product support; if there isn't a "version", I have no idea how you'd debug that from a support perspective, as you then have large (if not essentially infinite) numbers of versions around. Simplification through standardization is a very important thing there.

    This is not quite accurate - we use the west tool to track the exact state of each repository that goes into a release or tag. This way, you can be absolutely sure that the code you get when you download an SDK version will be the same each time, regardless of the state of the different repositories on main.
    Thus, there exists only a single 'nRF Connect SDK v2.5.0', for instance - because it draws from all the repositories at specific commits, so there can be no surprises there.

    Randy Lee said:
    Should it be done? Harder question. I just wish you'd picked a better OS to work with, like FreeRTOS or MQX or something. I will say that it's rather a moot point since this train has left the station already, so I'll have to suck it up and deal with it, but I have to rest my hope on the notion that Nordic will continue to significantly improve its good start on actual tools to deal with this.

    While there does not need to be an RTOS under it, I definitely think the benefits far outweigh the costs. Many elements went into the consideration when we chose the Zephyr project as the foundation of the SDK, and one of them was its open-source approach. We don't want our customers to have to spend time re-inventing the wheel when they switch between different vendors. We wish to 'compete on level ground' with other vendors instead of contributing to creating hardware/software lock-ins for companies and developers because we believe our products are better suited for their needs.
    That said, this also means that we cannot rest on our laurels, and we must continuously work to improve our offering, both in terms of tools and in terms of product performance, to stay competitive.
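
    To make the single-I2C-thread suggestion a bit more concrete, here is a rough sketch. The request layout, queue depth, thread priority and the i2c0 node label are all just assumptions for illustration:

    ```c
    /* Rough sketch of funnelling all I2C traffic through one thread, so the rest
     * of the application never touches the bus directly and needs no locking. */
    #include <zephyr/drivers/i2c.h>
    #include <zephyr/kernel.h>

    struct i2c_request {
        uint16_t addr;   /* 7-bit device address */
        uint8_t  reg;    /* register to write */
        uint8_t  value;  /* value to write */
    };

    K_MSGQ_DEFINE(i2c_req_q, sizeof(struct i2c_request), 8, 4);

    /* Other threads only post requests. */
    int i2c_post_write(uint16_t addr, uint8_t reg, uint8_t value)
    {
        struct i2c_request req = { .addr = addr, .reg = reg, .value = value };

        return k_msgq_put(&i2c_req_q, &req, K_MSEC(10));
    }

    /* The one thread that owns the I2C controller. */
    static void i2c_worker(void *p1, void *p2, void *p3)
    {
        const struct device *bus = DEVICE_DT_GET(DT_NODELABEL(i2c0));
        struct i2c_request req;

        ARG_UNUSED(p1);
        ARG_UNUSED(p2);
        ARG_UNUSED(p3);

        while (k_msgq_get(&i2c_req_q, &req, K_FOREVER) == 0) {
            (void)i2c_reg_write_byte(bus, req.addr, req.reg, req.value);
        }
    }

    K_THREAD_DEFINE(i2c_worker_tid, 1024, i2c_worker, NULL, NULL, NULL, 5, 0, 0);
    ```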

    Best regards,
    Karl

  • This is getting long, so let me break it up just a tad.

    In my second example of something being too cooked, the vendor in question had support for 2 chips, both of which are now obsolete. That they had any support for those chips is odd, but there you go. They being a silicon vendor, I shouldn't expect them to have support for anything at all, so writing my own drivers for a chip they didn't support was to be expected. In this particular case, the chips that I eventually had to use were a lot dumber than the ones they originally used, so I had to do some interesting things to simulate what was required by the rest of the stack. But this example is a simple one. It's pretty linear and well defined.

    Now suppose I have an accelerometer. You can get those with 2, 3, 6 and 9 degrees of freedom. You can get them with all kinds of capabilities for how much they can process the data stream before talking to you (and even for how they talk to you), and there are devices that can talk to another stack of devices to come up with extra data to play with.

    The combinations are, for all practical purposes, infinite. Furthermore, if any of that changes on the board, it's very likely that the structure of the upper layers will have to change, depending on exactly how the hardware processes things and under what conditions.

    Therefore, the idea that it's even possible to provide drivers for every combination of chips and configurations is rather a bit of an overreach.

    To put a finer point on it, to my way of thinking the OS shouldn't even care about peripherals other than things like the clock tree and the NVIC operation.

    That said, if there are people who want to have drivers for something, that's fine. Standards are what is important there, so that things work pretty much the same way, more or less (unless they just can't or shouldn't -- I'll cover that in another section here).

    I don't expect that someone will have exactly what I'm after, so I just assume I'll have to write whatever I need. Not a big deal, as long as I have the underlying drivers for the SoC peripherals.

    Which brings me to the other major point here (continued)

  • Let me talk about PSoC Creator:

    This is a special chip that functions more like a gate array with an ARM core in it. That tool won't run on anyone else's SoC because, well, there aren't SoCs anything like it, other than e.g. Xilinx parts. I can do things on that hardware that flat-out can't be done on other hardware. The idea that you're going to be able to take a design and change SoC vendors and keep on trucking strikes at the heart of my assertion that this is a Big Iron viewpoint: again, if you don't deeply care about the hardware you're on, you aren't doing embedded. What you are doing is headless desktop code, and that's completely different.

    Let me give you a historical example: Digital Equipment Corp (DEC). We always used to quip that their OSes proved they were a hardware company. DEC hardware (the PDP-11 series) could do some really cool things. Really cool things. The OSes never dealt with them, so all of that was completely wasted. Oh, sure, Unix could run very nicely on it, but doing real time was something else. I *did* see someone doing a LEM simulator with one that was mighty impressive, and that was obviously not using DEC OSes to run it.

    My point here is that just because some SoC has an ARM core in it doesn't mean that the SoC from vendor A is anything like the SoC from vendor B. And the nature of that is what drives the application structure, and therefore the code. I select Nordic because of the crosspoint-switch kind of interconnects and because of the very low power it achieves. Some of the peripherals on there are done better by other vendors, and therefore, if I need what another vendor does with a particular peripheral, I'll need to use that other vendor to implement things. There is no changing SoCs in that kind of code that relies on the underlying hardware (see the previous note on embedded).

    The other point here is that there is a level of too-cooked a driver that impacts embedded work. If there is overhead in the driver that isn't needed *for the application*, then that driver might be taking too long when I've got time-critical things to do.

    Take a simple pin toggle: that ought to happen at clock speed, not via a call to some driver to play with the pin. This should take ns, not µs. Same for pin state checking. These need to be implemented as macros for the SoC being used (different SoCs handle pins differently), not as a function call at all (see the sketch at the end of this post).

    The same can be said for my I2C example above. If I need response times measured in µs, then having all the other crap in there is a problem. Many years ago (like 50), I was taught this lesson on DEC hardware: what they labeled an RTOS (for those of you who remember RT-11) wasn't responsive enough for the application the engineer was attempting.

    The same goes here: simple is better; adding complexity eventually removes you from real time. And with 1,500 people working on a project, you'll get there real fast.
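
    To illustrate the pin-toggle point, here is a rough sketch, assuming an nRF52-class part (the pin number is made up, and the register names come from the Nordic MDK/HAL headers):

    ```c
    /* Rough comparison of a portable driver call versus a direct register write.
     * Assumes an nRF52-class SoC; the pin number is made up for illustration. */
    #include <zephyr/drivers/gpio.h>
    #include <hal/nrf_gpio.h>   /* pulls in the MDK register definitions */

    #define LED_PIN 13  /* hypothetical pin on port 0 */

    /* Portable path: goes through the GPIO driver API (a function call plus
     * whatever the driver does internally). */
    static inline void toggle_via_driver(const struct device *port)
    {
        gpio_pin_toggle(port, LED_PIN);
    }

    /* Direct path: single register writes, a few CPU cycles each. */
    static inline void pin_set_fast(void)
    {
        NRF_P0->OUTSET = (1UL << LED_PIN);
    }

    static inline void pin_clear_fast(void)
    {
        NRF_P0->OUTCLR = (1UL << LED_PIN);
    }
    ```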

  • " We don't want our customers to have to spend time re-inventing the wheel when they switch between different vendors. We wish to 'compete on level ground' with other vendors instead of contributing to creating hardware/software lock-ins for companies and developers because we believe our products are better suited for their needs."

    Again, this is a Big Iron viewpoint. Every vendor has a different take on the SoC architecture and peripheral sets. If your code isn't taking advantage of that, you probably aren't doing embedded. The idea that you can take an application and move it to another architecture (the CPU isn't all that important at the same bit depth) defeats your argument that Nordic (or any other chip vendor) wants to compete on a level playing field. If it were truly level, then the technology under it would be meaningless and therefore a commodity. The only things you'd have to compete on would be price and delivery; a losing proposition unless you're the 800-pound gorilla in the space.

    There isn't any such thing as moving from one vendor to another without lock-in. It doesn't happen in reality. Moving is always a pain, because the entire intrinsic structure of the application depends on the underlying hardware architecture. If it doesn't, then you aren't doing embedded; you're doing headless desktop.

    Which some of these applications, in fact, are. Don't get me wrong. But that's a Big Iron application, not a Small Iron application. Don't confuse the two. Again, things that attempt to be all things to all people generally don't succeed at either.

    Best Regards,

    rjl
