Why I detest LabVIEW

Because my robot’s control system runs on a LabVIEW real-time machine, I have no recourse but to add new features in LabVIEW. Oh, I tried coding new stuff in C++ on another computer and streaming information via UDP over gigabit Ethernet, but alas, additional latencies of just a few milliseconds are enough to make significant differences in performance when your control loop runs at 2 kHz. So I must code in LabVIEW as an inheritor of legacy code.
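
For context, the cross-machine experiment looked roughly like this on the C++ side (a minimal sketch using POSIX sockets; the address, port, and payload are invented for illustration):

    // udp_sender.cpp: minimal sketch of streaming one command packet to the
    // real-time machine. Even over gigabit Ethernet, the send/receive/schedule
    // round trip adds a few milliseconds, which a 2 kHz loop cannot absorb.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5005);                        // hypothetical port
        inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr); // hypothetical RT target

        double command[3] = {0.0, 0.0, 0.0};                // one control-loop command
        // In the real setup this would run once per 0.5 ms control tick.
        sendto(sock, command, sizeof command, 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof dest);

        close(sock);
        return 0;
    }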

With a computer engineering background, I find that having to develop in LabVIEW fills me with dread. While I am sure LabVIEW is appropriate for something, I have yet to find it. Why is it so detestable? I thought you’d never ask.

  • Wires! Everywhere! The paradigm of using wires instead of variables makes some sort of sense, except that for anything reasonably complex, you spend more time trying to arrange wires than you do actually coding. Worse, finding how data flows by tracing wires is tedious, especially since you can’t click and highlight a wire to see where it goes; clicking only highlights the current line segment of the wire. And since most wires aren’t completely straight, you have to click through each line segment to trace a wire to the end. [edit: A commenter pointed out that double-clicking a wire highlights the entire wire, which helps with the tracing problem]
  • Spatial Dependencies. In normal code, it doesn’t matter how far away your variables are. In fact, in C you must declare locals at the top of functions. In LabVIEW, you need to think ahead so that your data flows don’t look like a rat’s nest. Suddenly you need a variable from half a screen away? Sure you can wire it, but then that happens a few more times and BAM! suddenly your code is a mess of spaghetti.
  • Verbosity of Mathematical Expressions. You thought low-level BLAS commands were annoying? Try LabVIEW. Matrices are a nightmare. Creating them, replacing elements, accessing elements, any sort of mathematical expression takes forever. One-liners in a reasonable language like MATLAB become a whole 800×800 pixel mess of blocks and wires (see the sketch after this list for the one-line text equivalent).
  • Inflexible Real-Estate. In normal text-based code, if you need to add another condition or another calculation or two somewhere, what do you do? That’s right, hit ENTER a few times and add your lines of code. In LabVIEW, if you need to add another calculation, you have to start hunting around for space to add it. If you don’t have space near the calculation, you can add it somewhere else, but then suddenly you have wires going halfway across the screen and back. So you need to program like it’s the old-school days of BASIC, where you label your lines 10, 20, 30 so you have space to go back and add line 11 if you need another calculation. Can’t we all agree we left those days behind for a reason? [edit: A commenter has mentioned that holding Ctrl while drawing a box clears space]
  • Unmanageable Scoping Blocks. You want to take something out of an if statement? That’s easy, just cut & paste. Oh wait, no, if you do that, then all your wires disappear. I hope you remembered what they were all connected to. Now, I’m not sure the wire paradigm could ever handle this use case gracefully, but compare it to cutting & pasting 3 lines of code from inside an if statement to outside: 3 seconds, if that, compared to minutes of re-wiring.
  • Unbearably Slow. Why is it that when I bring up the search menu for Functions, LabVIEW 2010 freezes for 5 seconds, then randomly shuffles the windows around, making me go back and hunt for the search box so I can search? I expect better on a quad-core machine with 8 GB of RAM. Likewise, compiles to the real-time target take 1-5 minutes. You say, “But C++ can take even longer,” and this is true. However, C++ doesn’t make compiles blocking, so I can modify or document code while it compiles. In LabVIEW, you get to sit there and stare at a modal progress bar.
  • Breaks ALT-TAB. Unlike any other normal application, if you ALT-TAB to any window in LabVIEW, LabVIEW completely re-orders the Windows z-order so that you can’t ALT-TAB back to the application you were just running. Instead, LabVIEW helpfully pushes all other LabVIEW windows to the foreground, so if you have 5 subVIs open, you have to ALT-TAB 6 times just to get back to the other application you were at. This also means that if you click on one LabVIEW window, LabVIEW will kindly bring all the other open LabVIEW windows to the foreground, even those on other monitors. This makes it a ponderous journey to swap between LabVIEW and any other open program, because suddenly all 20 of your LabVIEW windows spring to life every time you click on one.
  • Limited Undo. Visual Studio has nearly unlimited undo. In fact, I once was able to undo nearly 30 hours of work to see how the code evolved during a weekend. LabVIEW, on the other hand, has incredibly poor undo handling. If a subVI runs at a high enough frequency, just displaying the front panel is enough to cause misses on the real-time target. Why? I have no idea. Display should be much lower priority than something I set to ultra-high real-time priority, but alas, LabVIEW will just totally slow down at mundane things like GUI updates. Thus, in order to test changes, subVIs that update at high frequencies must be closed prior to running any modifications. Of course, this erases the undo history. So if you add a modification, close the subVI, run it, and discover it isn’t a good modification, you have to go back and remove it by hand. Or if you broke something, you have to go back and trace your modifications by hand.
  • A Million Windows. Please, please, please for the love of my poor taskbar, can we not have each subVI open up two windows for the front/back panel? With 10 subVIs open, I can see maybe the first letter or two of each subVI. And I have no idea which one is the front panel and which is the back panel except by trial and error. The age of tabs was born, oh I don’t know, like 5-10 years ago? Can we get some tab love please?
  • Local Variables. Sure you can create local variables inside a subVI, but these are horribly inefficient (copy by value) and the official documentation suggests you consider shift registers, which are variables associated with loops. So basically the suggested usage for local variables is to create a for loop that runs once, and then add shift registers to it. Really LabVIEW, really? That’s your advanced state-of-the-art programming?
  • Copy & Paste . So you have a N x M matrix constant and want to import or export data. Unfortunately, copy and paste only works with single cells so have fun copying and pasting N*M individual numbers. Luckily if you want to export a matrix, you can copy the whole thing. So you copy the matrix, and go over to Excel and paste it in and……….suddenly you’ve got an image of the matrix. Tell me again how useful that is? Do you magically expect Excel to run OCR on your image of the matrix? Or how about this scenario: you’ve got a wire probed and it has 100+ elements. You’d like to get that data into MATLAB somehow to verify or visualize it. So you right click and do “Copy Data” and go back to MATLAB to paste it in. But there isn’t anything to paste! After 10 minutes of Googling and trial and error, it turns out that you have to right click and “Copy Data”, then open up a new VI, paste in the data, which shows up as a control, which you can then right-click and select “Export -> Export Data to Clipboard”. Seriously?!? And it doesn’t even work for complex representations, only the real part is copied! I think nearly every other program figured out how to copy and paste data in a reasonable manner, oh say, 15 years ago?
  • Counter-Intuitive Parameters. Let’s say you want to modify the parameters to a subVI, i.e., add a new parameter. Easy, right? Just go to the back panel with the code and tell it which variables you want passed in. Nope! You have to go to the front panel, right-click on the generic-looking icon in the top right-hand corner, and select Show Connector. Then you select one of those 6×6 pixel boxes (if you can click on one) and then the variable you want as a parameter. LabVIEW doesn’t exactly go out of its way to make common usage tasks easy to find.
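
As promised in the math-verbosity item above, here is the kind of one-liner that balloons into blocks and wires, sketched with the Eigen C++ library (the matrix sizes and names are invented for illustration):

    // y = A*x + b: one line of text, a screenful of LabVIEW blocks and wires.
    #include <Eigen/Dense>

    int main()
    {
        Eigen::MatrixXd A = Eigen::MatrixXd::Identity(4, 4); // hypothetical gain matrix
        Eigen::VectorXd x = Eigen::VectorXd::Ones(4);        // hypothetical state
        Eigen::VectorXd b = Eigen::VectorXd::Zero(4);        // hypothetical offset

        Eigen::VectorXd y = A * x + b; // the entire "diagram"
        return 0;
    }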

Now granted, there are some nice things about LabVIEW. Automatic garbage collection [or rather, the implicit instantiation and memory management of the dataflow format, as one commenter pointed out], easy GUI elements for changing parameters and getting displays, and… well, I’m sure there are a few other things. But my point is, I am now reasonably proficient in LabVIEW basics and still have no idea how people manage to get things coded in LabVIEW without wanting to tear their hair out. There are people who love LabVIEW, and I wish I knew why, because then maybe I wouldn’t feel such horrific frustration at having to develop in LabVIEW. I refuse to put it on my resume and will avoid any job that requires it. Coding in assembly language is more fun.

23 Comments to “Why I detest LabVIEW”

  1. MRH says:

    Double-clicking a wire highlights the entire wire.

    Holding down CTRL and drawing a dotted box on the screen moves everything to create an empty space within that box.

    Most of the rest have solutions too.

  2. briancbecker says:

    Thanks for the tips! I am still not fond of LabVIEW, but I do welcome any ways to make life easier 😉

  3. John J. says:

    I feel your pain. I’ve been coding for 26 years and avoided labview at all costs. Now, I’m the labview admin for my lab and I truly hate programming in pictures. Put simply, labview is programming for people who don’t know how to program.

  4. Aristos Queue says:

    > Automatic garbage collection

    Just FYI, LabVIEW does not have garbage collection. Data is allocated when it is needed and thrown away when you’re done with it. The advantage of dataflow (or most functional programming languages) is that you, the programmer, do not have to write explicit memory allocation declarations, and you still get efficient use of memory, without the overhead of a garbage collection thread.

    As for the rest of your post, I am a LabVIEW R&D dev, so anything I say is obviously biased in favor of LabVIEW, but perhaps I can help out a bit. LabVIEW is written with a combination of C++, C# and the previous version of LabVIEW, so I am familiar with the comparisons between LV’s dev environment and VisualStudio.

    I do not expect my comments will make you love or even like LabVIEW, but perhaps I can smooth out some of the bigger problems you’re having.

    I will concede that you have a valid complaint about window management. There are techniques to ameliorate it, but none really fixes the problem. The Project window is useful for larger projects, allowing you to pull VIs to the foreground more easily. Ultimately, the windowing problem is just another variation of the tab problem you face inside Visual Studio — get enough files open and you spend forever scrolling through the horizontal tab list, with source code control tabs, team explorer tabs, and all sorts of other stuff mixed into the list. LabVIEW puts its “tabs” into the window explorer, which in theory allows us to piggyback on the OS as it gets better at giving users ways to manage many open windows. This theory sometimes does not work in practice. 😉

    In general, your other complaints stem from two items: 1) VIs that have gotten too large and should be broken into subroutines and 2) a lack of knowledge of the full range of LV editor features.

    SubVIs that have gotten too large are at the heart of your wires complaint. This has much in common with the problems that begin to occur in text programming when you get a function that spills off the screen: you do a lot of scrolling up and down, you start accidentally naming variables the same as variables that already exist, testability decreases, etc. It’s bad in any programming paradigm. LabVIEW just makes the messiness actually visible, which I tend to think is a nice thing, since that visceral “ick” encourages refactoring.

    If you start breaking VIs down into smaller chunks, many of your editor performance issues will evaporate too. Ever notice how you don’t really have to stop your work to compile LabVIEW on the desktop? That’s because we compile bits and pieces as you go. As a VI gets bigger, our ability to hide that overhead decreases.

    The dataflow paradigm offers much more programming safety than procedural code when it comes to parallelism, and the reason we discourage local variables is, yes, partly the memory overhead, but far more that the compiler no longer has information about what depends on what for thread scheduling. LabVIEW’s compiler can do wondrous things with thread scheduling, but only if it can see an order to the dependencies. Variables take that away.

    The compiles for real-time that freeze you out are a different issue and one that I’m not able to speak to — not my area of expertise.

    The “limited undo” — you’ve been able to move the limit up for years in Tools >> Options, and about four years ago we bumped the default up a lot. As for your complaint about the open front panel: closing it does wipe the undo history, just like closing a text file in Visual Studio wipes its undo. The problem is not that LabVIEW prioritizes the updating of the front panel. The problem is that in order to update the front panel at all, we have to make a copy of the data as part of the execution. Normally, we do not preserve the value — the values returned from a subVI get used and possibly modified by the caller VI. The UI does update in a delayed fashion, but the extra data copy must be made right then in order for there to be a value for the UI to use later. This may not make you like the situation, but I hope it at least explains what’s going on and helps you understand we’re not complete idiots who are prioritizing the UI. 🙂

    Anyway, that’s a smattering of feedback. Maybe it helps, perhaps not. I’ve worked in Haskell, Pascal, C, C++, C#, Java, Lisp and, yes, BASIC over the last quarter century, and LabVIEW is my favorite programming language, though I freely admit it has its warts, just like any language/dev environment does. I figured I’d take a bit of time to maybe help you see past its flaws, because, frankly, dataflow is just too powerful an expression tool to put aside in a multicore programming world, and maybe this post will help you be able to use it more.

    At any rate, I wish you well, wherever your programming paradigms take you. 🙂

  5. briancbecker says:

    Hi Aristos,

    Thanks for your comments! I appreciate you taking the time to correct some of my misconceptions and provide a different perspective. I will admit that LabVIEW does have some awesome hardware integration and definitely eases deployment. My post was mainly an outlet for too many hours of frustration trying to get LabVIEW to do things that I could have accomplished in much less time using a more traditional language. And while I still prefer text-based languages, I’ll concede that my familiarity with C++/MATLAB/etc. and lack of in-depth knowledge of the LV editor is definitely a contributing factor. 🙂

    Brian

  6. dustin says:

    The comments by aristos covered most of your listed problems with LabVIEW, but I want to make one more suggestion. Peter Blume’s “The LabVIEW Style Book” is a wonderful resource for learning to maximize efficiency, flexibility, and maintainability of LabVIEW code, and covers most of the problems you mention. Don’t know if anyone has recommended it to you in the past, but if you are forced to use LabVIEW, this would be a great book to add to your reading list. It won’t make you love LabVIEW, but if you follow the book’s many wise rules of LabVIEW style, it might make you hate it a lot less.

    Cheers,

  7. sachin says:

    I have done quite a lot of development with LabVIEW and some development in C. Here are some of my comments.
    Problems I face with LabVIEW:
    1. Big limitations when writing comments. In C, you can just add them anywhere and everywhere, which helps a lot for future reference. LabVIEW, being diagram-based, doesn’t let you write comments everywhere.
    2. Since you have spent your whole life reading and writing words (not diagrams), it is far easier to relate to words than to blocks.

    The above are the major limitations I am facing with LabVIEW.

    Problems with C / Advantages of LabVIEW:
    1. C requires a hell of a lot of memory management. Not required in LabVIEW.
    2. All loops can auto-index in LabVIEW; if data is coming from somewhere, there is no need to find out beforehand how much is coming. Try that in C!
    3. Very few run-time errors and far fewer program crashes. Crashes from problems like bad array indexing are unheard of in LabVIEW.
    4. A whole range of built-in functions is available in LabVIEW, which saves a lot of development time.
    5. It is easy to create GUIs. In C, half of your time will be wasted creating menus and checkboxes.

    For someone without a computer engineering background, the above advantages are of great use.

  8. alf says:

    The only reason why Labview is becoming more popular is cost (programmer cost, that is). It takes a minimum of 10 years to become really functional and comfortable programming in C/C++. However, if your code really matters ($$$), you should implement it in C/C++ (no free lunch). Please, save me the comments about why this last sentence is true.
    You can write fast code, write code faster, or write code that is efficient; alas, you can achieve only two of these in any given language.

  9. Ricardo Schmidt says:

    Most of the points you listed have a good reason to be the way they are.
    When you get deeper into LabView programming and learn all its shortcuts, you understand them and get used to them.
    The only thing I would change is having tabs at the top for opened VIs and at the bottom to switch between front panel and block diagram, so Alt+Tab would switch between LabView and other programs, and Ctrl+E just between FP and BD.

  10. Bob says:

    FYI: you can include C code in LabVIEW by compiling your C code as a DLL (use dllexport on the function declarations), and then use LabVIEW’s Call Library Function node. If you like C and are stuck with LabVIEW (as I am), then that’s the way to go!
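
    A minimal sketch of the kind of DLL Bob describes (the file and function names here are made up for illustration; the Call Library Function Node and its array-data-pointer parameter type are real LabVIEW features):

        // scale.cpp: hypothetical example; build as a DLL, e.g. cl /LD scale.cpp
        // extern "C" prevents C++ name mangling so LabVIEW can find the symbol;
        // __declspec(dllexport) puts it in the DLL's export table.
        extern "C" __declspec(dllexport)
        void scale_array(double *data, int len, double gain)
        {
            // In the Call Library Function Node, configure `data` as an
            // array data pointer and pass the length separately.
            for (int i = 0; i < len; ++i)
                data[i] *= gain;
        }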

  11. briancbecker says:

    That’s a good tip for regular LabVIEW, but unfortunately things are a bit more complicated on the LabVIEW real-time target. I think it’s still possible, but you are quite a bit more limited.

  12. Synaesthete says:

    I’ve been a professional LabVIEW programmer for several years now working for a major integrator. I’ve also worked in a multitude of text-based languages. I completely agree with all of your points. The initial workflow of LabVIEW is very good for non-programmers working with instrument communication. However, as your skills advance, the efficiency curve for LabVIEW begins to level off much more significantly than if you were working inside a good IDE/text editor. I’m as efficient as is possible (I’m a LabVIEW Architect), and each time I have to create a new VI for something simple I kind of cringe–there’s an inherent ‘blip’ in your programming workflow each time you want to create the equivalent of a new method. You have to constantly pay attention to several parameters related to effective coding style at once, and even with the best style guidelines (I have read Blume’s book cover-to-cover at least twice), code becomes less manageable with project size. Also, with text-based languages, the “cleanup” process is far simpler–with LabVIEW it’s a real headache.

    The basic design of the LabVIEW UI is very rapidly being outshone by other editors on the market (it has been behind for a good 8 years now). Adobe’s latest set of tools for graphic design are reaching an incredible level of ‘mental ergonomics’ and smooth workflow. Simple text editors such as “Sublime Text 2” have also made huge strides with text editing. Furthermore, earlier in my current project I decided I’d switch editors entirely from Eclipse to Sublime Text 2 and it took me a single afternoon. With LabVIEW, it’s a major effort just to switch a project between versions–and alternative G editors don’t even exist.

    With the rapid maturation of text editors, evolution of syntax, language interoperability, and universal cross-industry acceptance of text-based programming, I don’t see LabVIEW lasting more than a couple more years. The continued adoption of LabVIEW I believe has a lot more to do with efforts by NI to support their hardware and market LabVIEW to non-programmers. I believe this is not a sustainable strategy for NI, who should have their eyes on better support for their hardware in text-based languages. A great instrument control and data acquisition library accessible to programmers who use something like Python would really satisfy people’s needs far better than LabVIEW in the coming years. Additionally, as text-based programming becomes more widely taught from the high school level up, even the value of LV as a non-programmer’s programming language is losing ground as a valuable market differentiator.

  13. Long time labview user says:

    Your complaints probably stem from the fact that you’re very efficient in a text-based environment, and not nearly so efficient in the graphical environment of LabView. Here’s my point-by-point:

    Wires Wires: The “too many wires are hard to trace” complaint is silly; that’s what subVIs, clusters, and classes are for. I tend to architect my applications around a set of classes that are all in an inheritance chain. This limits the number of wires possible for any particular subVI because it can only see what’s in its class. Yet a single wire can contain N parameters.
    Spatial Dependencies: Keep your code to screen size; if you can’t, it’s time to subVI. It’s super easy in LabView.
    Inflexible Real Estate: A commenter dealt with it already
    Unmanageable Scoping Blocks: This is a case of Wires Wires. Keep the number of wires under control and reconnecting them is easy.
    Unbearably Slow: Ok, in the case you wrote, it’s kind of valid. But why do you want to modify code before seeing how the compiled version runs?
    Local Variables: Local variables are there to let you do stuff quick and dirty. I use them to initialize control values at the start of execution, for instance. For hard-core data passing between threads or between iterations of a loop, there are Functional Global Variables, which are built on shift registers. By the way, shift registers fall into the wired data flow model very nicely. With clusters or objects, you can essentially keep all your data in the shift register. Other than for GUI purposes, you shouldn’t use local variables. There are also queues and notifiers and semaphores. I feel you are bitching about this one just to bitch. You know they are problematic, you know what to do instead, so what’s the problem? That LabView lets you decide the pros and cons of using a particular paradigm? Then let’s bring up C and overrunning arrays.
    Copy and Paste: OK, it’s not straightforward, but it’s very easy to run an array through a “convert array to spreadsheet string” VI and just cut and paste the string. Big deal. You want to visualize data? I actually prefer LabView to Excel for this, because LabView graphs are way more flexible and accessible, especially if you have a fixed data file format and want to do some kind of batch processing.
    Verbosity of Mathematical Expressions: There are formula nodes and math nodes and even Matlab nodes for doing complicated multivariable math that you want to see in a text format.
    Counter Intuitive Parameters: For some reason, I never thought that the VI connector pane was a pain. Honestly, this is one of the first things you learn, and once learned, what’s counterintuitive? There are plenty of ways to synthesize a VI with connections in place; you can select a chunk of code and choose “Create SubVI” from the Edit menu, for one. All connections are already there. The nice thing is that once you’ve made your subVI, it’s easy to make sure the right data is routed to it because the data type is immediately obvious.

    Look, just because you’re bad at LabView doesn’t mean that LabView is bad. The thing to remember when switching from text-based programming is that LabView follows the same rules, but has different conventions and names. For example, loops need shift registers to carry data from one iteration to the next. In text-based languages, once a variable is modified, its value stays the same until modified again; keep in mind that the shift register is essentially a variable. Controls and indicators are not variables. They can be used as such, but really they are input and output functions.
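
    For text programmers, a rough C++ analogue of that shift-register idea (purely illustrative; LabVIEW itself has no such syntax):

        // `total` plays the role of a shift register: initialized outside the
        // loop (the left terminal), updated each iteration (wired across the
        // loop body), and read after the loop ends (the right terminal).
        #include <cstdio>

        int main()
        {
            double total = 0.0;
            for (int i = 0; i < 10; ++i)
                total += i;
            std::printf("total = %f\n", total);
            return 0;
        }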

  14. Vince Kelly says:

    Hello,
    Your blog about the problems with LabVIEW made me laugh. You’re certainly not alone, and we ended up developing our own data acquisition/system control software for PCs called ComScript. I always found the LabVIEW graphical language hard to work with and just plain hard to understand. Then it crashed frequently, so we just couldn’t use it for our long-term monitoring projects.

    ComScript uses a simple XML-based language, and we just completed the script editor/builder to automatically write code. I know it’s not the end-all be-all, but ComScript is stable, has good functionality, and uses text-based code.

    Would love some feedback from labview users. http://gescience.com/ComScript.html

    Thanks, Vince

  15. Rob MacLachlan says:

    As the person that caused Brian to have to deal with Labview, I can say that he is indeed an excellent programmer in C++, and did a credible job in Labview. I myself have an unusual background which surely contributes to my peculiar opinions about programming languages and environments. I worked for 10 years at CMU in the computer science department developing compilers for Common Lisp and Dylan and working on these language standards. For various reasons I had become somewhat burned out on the whole programming language and environment thing, and worked to retread myself as an electrical engineer with a particular interest in analog and power electronics. I switched to working in robotics because it got me closer to designing hardware. At first I just used C++, then added Matlab to the mix. Of course I had the usual problems with cryptic errors on dereferencing NULL or deallocated pointers, and it was quite annoying given my background with garbage-collected languages. At the same time I was also programming PIC microcontrollers, which made gcc and gdb seem quite luxurious.

    When I started working on the Micron project there wasn’t a lot of code, and what there was needed to be rewritten, and my boss didn’t care how I got it done.

    One of the drivers was that we had some need for GUIs, and one of the things I had learned over the years was that I hated writing GUIs, at least using the tools I had been using. Also, the lab had been pretty much a Windows shop, while I had only ever programmed under Unix. I did make a first pass modifying existing code using the Win32 APIs and GUI builder, and I did hate it. The Win32 APIs were every bit as bit-twiddling low-level as Unix (error prone), but also just plain different. I didn’t want to have to learn to program that way all over again.

    I had heard about Labview being something that people used in research data acquisition applications, and decided to check it out. It did meet the requirements of easy GUI creation and also (importantly) hiding the Windows API so that I didn’t have to learn it. At first it was just a signal processing app that ran at 100 samples/sec with data blocking for offline analysis. This was indeed a fairly classic Labview application, well within its sweet spot. Then we decided to use this sensor system inside our feedback loop, requiring 1000 samples/sec. This required using the Labview real-time module with deployment to a separate RT target. This was a somewhat more sharp-edged environment with worse debug support, but it did get the job done (and RT has continued to improve in usability).

    As a language designer and implementor, and someone who also worked on a non-text-based environment for a text-based language (the Gwydion project), my observation is that Labview happens to have some properties with significant benefits, properties that differ significantly from traditional sequential imperative languages and text-based environments.

    One: it is a largely functional language (discouraging use of side effects). It is quite possible to have functional text-based languages, but explicitly representing the data dependencies as wires does reveal an important aspect of parallelism. My guess is that Labview’s functional nature is a historical accident driven by the choice of the graphical representation, but it’s certainly coming in handy in the world of multi-core processing. Functional code is also safer than code relying on implicit side effects and sequencing, but it requires relying on the compiler to avoid unnecessary copies, which can be a problem when the compiler lets you down.

    Two: code is rich. It has semantic associations that you can’t see right away. This is what enables the fluid evolution of Labview GUIs. The GUI is not some attribution “on the side” of the code that can become inconsistent. Labview knows which terminal is associated with which GUI widget. Because the code is rich, the association between the GUI (control panel) and the code (block diagram) can be enforced to be semantically consistent. You could make code that looked like text but was rich (that was what we were working on in Gwydion), but once you take the hit of abandoning the idea that the text files are definitive (which is a big problem for source control), why limit yourself to stuff that looks like text?

    In summary, being functional is (mostly) good, and is synergistic with the graphical data-flow representation. Rich code is (mostly) good, and enables graphical programming. Neither of these deep semantic distinctions of Labview is equivalent to the observation that the language is graphical.

    So far as Brian’s complaints about wires, spatial dependencies, and real estate go: yes, I do spend a lot of time deciding how to lay things out with a logical flow and rearranging, getting structures that are similar to look similar, and so on. It’s very much like an electronic schematic in that way. That is a hit which is much less severe in a text-based language; there are so many semantically insignificant degrees of freedom that you can spend time optimizing. It’s hard to say whether it’s overall worthwhile in comparison to a text-based language, but if you *are* programming in Labview, then I think it’s worth being patient with that sort of thing. It pays off down the line in code that is easier to understand, IMO often easier to understand than a text program.

    It’s definitely a weakness of Labview that numerical expressions look *so different* from standard linear infix notation. Whether this is better or worse in an absolute sense is hard to say, but FORTRAN did try to translate formulas as naturally as it could, given character set limitations. You can use infix notation via formula blocks and expression nodes. I use expression nodes when I have a unary expression, but mostly avoid formula nodes. For one thing, the formula language isn’t vectorized.

    So far as arrays go, even given the notational awkwardness of the graphical representation, Labview is way better than C++ because it is vectorized, rather like MATLAB. You can get some of those effects in C++ by overloading, but it’s already there in Labview, and you don’t have to worry about the buffer management and in-place optimizations. In MATLAB you can write some beautifully concise and cryptic array expressions. Translating MATLAB code into Labview is a nontrivial operation, which I have done manually a few times. Labview has a “mathscript” feature which allows embedding a MATLAB subset into Labview. I haven’t tried this recently because there is an extra fee to use it on real-time. I’m not entirely optimistic that the results would be as efficient as hand-written Labview code.

    Some of the cut-and-paste difficulties are pretty intrinsic to the graphical code model, I think. But there are advantages of rich code besides the GUI integration. I like commenting in Labview, and I consider it a considerable strength that I can embed images in the block diagram. I often put in scans of the pencil-and-paper diagrams that I always draw to understand what I am doing, which in text-based languages I ended up losing or throwing away.

    Overall, I think that robotics is an application that does fall into Labview’s sweet spot. The biggest advantage of Labview for robotics is that in many cases (where the rate of significant change is less than a few Hz) you can directly visualize relevant time variation, and where the change is faster, you can still visualize in real time using graphs that update every second or so. In my experience doing robotics in Unix, what I ended up doing was debugging in batch mode, writing out trace files and then reading them in emacs or visualizing them using Matlab. But as well as being slower than seeing the change in real time, it is simply different, and IMO often inferior. When the system is running in Labview, you can mentally correlate in real time the displayed state of the software with what you can see with your eyes and hear with your ears. The ease of making animated graphical GUIs is crucial here.

  16. Thanks Rob, for chiming in with the insightful post! The functional aspect of LabVIEW and the intertwined nature of data flow with GUI are definite strengths of LabVIEW, and the number of really cool real-time visualizations you can do very simply is quite amazing when you come right down to it. Replicating such sophisticated plotting tools yourself in real time (as opposed to dumping data out to analyze offline in Matlab, which is certainly a more tedious two-step process, although I’d argue what you can do with the data once in Matlab is more powerful, at the cost of often significantly more initial script-writing time) is certainly a huge kudos to the LabVIEW platform. Plus the data acquisition with NI hardware is, of course, quite nice compared to writing your own drivers 😉

    And gosh, yes, the raw Windows API is terrible for developing anything more complicated than, well, hello world. Perhaps the best thing I took away from LabVIEW was how bullet-proof the final system is. Sure, there are random connection issues sometimes, and you need to know to close a particular subVI before running the system or it will stutter, but I do have to admit that as far as depending on LabVIEW software goes, I never needed to worry about hunting down double frees or random segmentation faults. When done right, LabVIEW is incredibly professional – which made it an easy-to-use, robust platform that was fairly painless to integrate with – as long as I didn’t need to modify/add to any of the underlying LabVIEW code itself 😀

  17. Synaesthete says:

    Great conversation here, nicely balanced points-of-view. Issues with the programming environment aside, one of the big issues I’ve run in to relates to large-scale architecture, the nature of software design, and LabVIEW’s ability to be scalable and maintainable.

    LabVIEW does very well in a producer-consumer architecture for simple data acquisition and control, usually in a bench environment.

    It’s well established that a “separation-of-concerns” approach to software architecture yields huge benefits in terms of code re-usability. That is to say, the advantages are greater than code re-use derived from appropriate style guidelines or simple modularization (such as building a hardware abstraction level). A true separation-of-concerns approach (model-view-controller, for example) allows for modularity on a larger scale, permitting sweeping changes to user-interface as well as changing out hardware with ZERO modifications required in other sub-systems (this is possible with a mature architecture).

    The solution to a separation-of-concerns approach in LabVIEW is to use LVOOP and the Actor Framework. The trouble with the Actor Framework is that it’s a bit of a ‘hack’, in the sense that it uses a lot of existing LabVIEW constructs in atypical ways, and there’s a lot of overhead involved in creating a new “Actor” (lots of new sub-VIs, class overrides, type defs, etc.) It’s just a lot more time consuming and requires more mental acrobatics than it ought to. This is due to the LabVIEW environment rather than the architecture of the Actor Framework.

    Another issue is related to the fundamental nature of software development. The waterfall vs. spiral battle in software development was won a while ago–software naturally follows the spiral model. It’s a process of iterative research, discovery, experimentation, testing, and revising. It should be the ultimate goal of all programmers and software companies to move towards their own mature code base that requires very little fiddling. Platforms mature through a cycle of rebirth–I am currently working with a mature platform that I have effectively rewritten in its entirety 8 times in the past three years. Because of LabVIEW’s spatial/physical metaphor, it’s easy to create a ‘mess’ when disassembling code… that is, it’s easier to wire up code, but not as easy to refactor it. If good software practice treats refactoring as a positive, recurring part of the process, the act of refactoring should be as fluid as the initial coding.

    After years of working with LabVIEW as well as text-based languages, and giving it careful consideration and a fair side-by-side comparison, I can’t say I would recommend LabVIEW to anyone at this time. Maybe 10 years ago, but the world of text-based programming is reaching a level of performance, ease-of-use, and industry adoption that LabVIEW isn’t going to see soon, if ever–certainly not in its current form.

  18. Anon says:

    Wanted to touch on some quick tips I’ve found to be very handy when trying to escape the constraints of the LabVIEW environment:

    1.) Move towards representing ‘all’ of your test and configuration data in JSON (https://decibel.ni.com/content/groups/interactive-internet-interface-json-toolkit-for-labview)

    2.) Write standard wrapper VIs to interface with NI hardware, using JSON to pass both commands and data to/from that interface

    3.) Expose your hardware to the network using TCP/IP or UDP such that only JSON is handled over that interface

    4.) Write most of your application in JavaScript and HTML5 (or some other language), and interface to your NI hardware using a WebSocket. The only reason I recommend the JavaScript/HTML5 approach is that it then becomes easy to interface with your hardware from mobile devices, etc…

    It’s a very modern approach, and you get the best of everything–good performance where it counts and amazing cross-platform interfaces that are all network-connected. Because HTML is essentially a “document”, you also have the added advantage of working with “live reports”–that is, live hardware-data and analysis that’s sent in real-time directly to a report format–in some cases you can skip much of the “UI” altogether, depending on your application.

    This approach reduces LabVIEW to an absolute minimum, so it’s only used to handle interfacing to NI hardware. Then imagine using a sexy jQuery interface with Canvas charts and graphs; it’s going to look a lot nicer than a native LV GUI.
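
    To make the idea concrete, the JSON flowing over such an interface might look something like this (the field names, channel, and values are purely illustrative, not any NI schema):

        { "cmd": "read_analog", "channel": "Dev1/ai0", "samples": 4 }
        { "status": "ok", "data": [0.012, 0.015, 0.011, 0.014] }

    The LabVIEW side only has to parse commands like the first line and answer with data like the second; everything else lives outside LabVIEW.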

  19. bob says:

    Source/revision control… a la RCS on Linux… can’t live without it… Can LabVIEW graphical code be RCS’ed? If so, then I would be willing to consider working in LV to be a ‘schematic drawing task’ as suggested above and try to enjoy its strengths rather than suffer its weaknesses, but without RCS I’m dead in the water. I don’t see how other software engineers/programmers can live without it either.

  20. Matt says:

    @bob: We put our LV projects in SVN, but I am not very satisfied with how it works. The basic problem is that VIs often get modified by LV automatically when some other VI in the project is changed. Why these changes happen is often not clear to me. A common scenario is that I fix a bug in one VI and then 5 changed VIs get committed. With a good commit message it works, but a text-based language teams up with SVN a lot better.

  21. Jams says:

    @Matt: Historically, LabVIEW saved compiled code in the same file as source code, and that is likely causing your problems (recompiles change the file). In more recent versions one can select an option to separate compiled code from source files (http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/saving_vis_compiled_code/). Doing that greatly improves use with source-code control.

    — James

  22. r.e. source control and separate compilation

    Unfortunately, National Instruments, in their infinite wisdom, decided to break separate compilation in Labview 2012 so that the benefits for source control are almost entirely defeated, at least for how our system is coded. What kinds of dependencies can there be between VIs? The main ones are typedefs, and due to some problem that I don’t entirely understand, but am sure is not easy to solve in general, you can get odd results when you suck in a VI that was compiled against an old version of a typedef. I never had any problems, but they’ve decided that all VIs that use a typedef need to be recompiled when the typedef changes. They also did an aggressive recompilation on the LV12 version change, which was not the policy on former upgrades.

    I think that a big part of the problem is that sophisticated users who want version control are a small minority of the Labview market, and they want to preserve intuitive behavior for people who don’t even know what compilation is.

    Rob

  23. To clarify, by “recompilation” in my above remarks, I mean dirtying the source file. The actual compilation products go into a cache elsewhere in the filesystem. They force the VI to be dirty even though the lvcompare file comparison tool can’t find any difference. This is clearly broken.
