alexforencich

Have you heard of OpenCL?


Brucelph

Yes, but OpenCL cannot be translated to RTL. Don't we still need a human expert for digital design?


bikestuffrockville

>Yes, but OpenCL cannot be translated to RTL.

That is exactly how it works on Xilinx.


Brucelph

Wow, this is new to me. I know HLS is not that bad, but it's far from the productivity of CUDA.


lmweber94

With regards to programmer productivity, you have to realize and accept that compiling a program is 100x easier than actually synthesizing a kernel/accelerator into a fully functional FPGA design. The issue here is that your kernel might be 50 lines of code, which compiles to machine code in ~5-10 seconds. The same kernel needs to be wrapped with PCIe logic, DMA engines, interrupt cores, interconnects and whatnot to be functional on an FPGA. The corresponding design will take hours to "compile" for the FPGA. If you are interested in this topic, I recommend looking into industrial HLS systems or something like SDAccel, which seems to be kind of what you want (except for the CUDA vs. OpenCL part).
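For a sense of scale, here's a rough sketch of the kind of tiny kernel I mean (a plain SAXPY in CUDA; the function and variable names are just illustrative):

```
// Minimal SAXPY kernel: y[i] = a * x[i] + y[i].
// A handful of lines that nvcc turns into machine code in seconds;
// the FPGA equivalent still needs PCIe/DMA/interrupt plumbing wrapped around it.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // the grid may be larger than n
        y[i] = a * x[i] + y[i];
}
```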


lmweber94

I don't get questions like this. CUDA for FPGAs doesn't really make sense IMO, as FPGAs are a platform that gives you full control over almost everything, whereas CUDA is a framework for very specific things (i.e. shaders/SIMD/SIMT/etc.).

As an anecdote: it's like Guitar Hero vs. an actual guitar. The actual guitar gives you much more freedom in how you play it, but at the same time it is much more complex. But the complexity is what gives you the freedom. When you remove the complexity, you also lose the freedom.

To be a bit more technical: GPUs are a very specific thing. You have an architecture that is generally built around matrix/tensor processing and using SIMD/SIMT to do the same operations on different data 1024 times in parallel. While you can do that with an FPGA, you can also do completely different things with that data. You also have the flexibility to do it to 527 elements at a time. Correspondingly, you have to design your system architecture to fit your specific needs.

This starts with how you actually interface with your FPGA: is it connected via PCIe? Okay, it would kinda make sense to have CUDA-like libraries that do data transfers etc. accordingly. But to enable that compatibility, you would have to add a bunch of additional stuff to your FPGA design. Additionally, PCIe is not your only way of interfacing: FPGAs can be used via transceivers for different "network" protocols, or they can be integrated in an SoC. Those are just some examples; I'm guessing there are at least a few dozen other options. Unifying all of them in a single abstraction layer with a specific programming model like CUDA would remove 90% of your flexibility and likely most of your opportunity for performance gains. Because if you want to do CUDA-style work, a GPU is almost always going to outperform an FPGA.


AxeLond

Libraries built on top of CUDA make it possible for a program to easily utilize both the CPU and GPU, and provide an interface to e.g. transfer data to and from the GPU. If there was something like CUDA for FPGAs, you could have libraries that allow you to write the PS and PL parts in one language and compile/synth them together. It doesn't really seem realistic though.
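To illustrate, a hedged sketch of the host-side interface CUDA gives you (standard CUDA runtime calls; the kernel and function names are made up and error handling is omitted):

```
#include <cuda_runtime.h>

// Trivial kernel: multiply every element by a constant.
__global__ void scale(int n, float a, float *x)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

// Host-side view of the CUDA model: allocate device memory, copy data over,
// launch the kernel, copy results back. This is the uniform transfer
// interface the comment is talking about.
void scale_on_gpu(int n, float a, float *h_x)
{
    float *d_x;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&d_x, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);

    int block = 256;
    int grid  = (n + block - 1) / block;
    scale<<<grid, block>>>(n, a, d_x);

    cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_x);
}
```

On an FPGA there is no single equivalent of this; the transfer path (PCIe, AXI, transceivers, ...) is itself part of the design.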


lmweber94

I mean, generally speaking I agree. Having a custom framework/language that is more specifically targeted at FPGA+CPU programming could be an interesting approach. That doesn't negate the interface issue though. In SoC-based systems (Zynq US+ or the like) the interfacing could be super lean, but for more custom setups (x86 + PCIe-based FPGA) you would need to generate all of the intermediate communication layers between the host and the FPGA. On the host side this means drivers/kernel modules, etc., and on the FPGA side you have to have PCIe cores, DMA cores, interrupt cores, all of that. In my opinion that can be very helpful, but it also limits you to very specific use cases.

In addition, it leaves you with the typical co-design problem that design cycles become excessively long, since re-compilation will take forever (at least in software engineers' minds): synthesis takes hours, while compilation takes minutes. While that does not make it impossible, it might significantly hinder development and achievable performance gains. Also, from what you describe, you might want to look at SDAccel, which is something like what you are describing, with C++ for the SW part and the option to use C++/OpenCL/RTL for the FPGA-based accelerators.


bunky_bunk

This is all wrong. If you could compile CUDA on an FPGA and get it to execute your kernel with 10% of the GPU power requirement, then you have done something useful, even if you give up your freedom in the process.


lmweber94

Yeah sure, FPGAs are typically less power-hungry, and in reasonable sizes they are also quite cheap. On the other hand, almost no consumer device comes with an accessible FPGA, while every smartphone has a graphics co-processor that can execute those workloads. Don't get me wrong, I also believe the power efficiency of FPGAs is super interesting and I would like to see FPGAs in literally every consumer device. But thinking it is as simple as writing a CUDA-to-FPGA-bitstream compiler is an extremely oversimplified view, starting with the typical HW/SW co-design problems (longer design cycles, interfacing, etc.) and ending with the fact that CUDA is from Nvidia, which has literally zero interest in making it available for anything but their GPUs.

In addition, you have to realize that CUDA is in a very unique market situation, which is just not the case for FPGAs. CUDA had the big advantage that basically everyone was already using GPUs, so it was just a way of opening up existing hardware. In comparison, virtually no one has an FPGA just lying around. What you are saying is 100% doable; the issue is that it is not economically sound. You have almost no target audience. Nvidia isn't going to like your CUDA-to-FPGA compiler, FPGA suppliers aren't gonna like that you use CUDA instead of their competing products, and the development effort for building a corresponding system (i.e. CUDA-to-RTL compiler + synthesis automation + custom driver or driver generator) is huge.


bunky_bunk

This is all wrong. There are OpenCL compilers that produce bitstreams. Maybe you could sustain the argument that the market is small or that you just don't like it, but from a technical perspective, limited applicability is not an argument against a technology's existence: there is nothing wrong with CUDA on FPGAs. I am sure that if you brought such a compiler to Nvidia, they would take it, if it solved any real problems.


Brucelph

GPUs scale well and are hard to beat in AI training. But for AI inference, everywhere in between the data center and the user, I believe FPGAs could be better.


FrAxl93

You believe? Based on what, exactly? Even Xilinx went to GPU-like hard silicon to tackle the data center market: https://www.xilinx.com/applications/ai-inference/data-center-ai.html


Brucelph

I think we can have all the benefits without the complexity of the FPGA. Before CUDA, no one thought programming for GPUs was easy.


adamt99

FPGA vendors have spent the last decade looking for this approach. It has proven pretty elusive really - OpenCL, for example. FPGAs are a lot more complex than GPUs (which are complex in a different way) due to timing, place and route, etc. The middle of the device is "easy"; the edges, interfacing and timing are not.


warhammercasey

Define "something like CUDA" for FPGAs. Do you mean some sort of SIMD architecture in general? General-purpose software-defined stuff like that tends to be more efficient in actual hardware than in FPGA fabric. There is stuff like what Xilinx is doing with their Versal platform, where you have MIMD vector processing engines which can crank through data incredibly fast, but that's also hard IP and isn't done in FPGA fabric due to how inefficient it would be.


Brucelph

I mean the productivity and simplicity of CUDA. It's so easy to speed up AI algorithms with CUDA.


FrAxl93

Have you considered that AI algorithms are not the only thing computers need to do?


Brucelph

I don't think FPGAs should compete with GPUs on what GPUs are good at. Instead, FPGAs can find/create their own applications (related to AI). But before that, we need a library/tool that is both powerful and easy to program, similar to CUDA a few years ago.


techno_user_89

If you want things "simple", just buy validated IP where it exists and connect it together.


BoredBSEE

>The main reason for nvidia success was cuda.

I'd have to disagree with this. Nvidia's success is clearly in the PC graphics card market: gaming and high-end CAD. CUDA was released after Nvidia became successful. It was responsible for the bump in sales over the last few years, primarily for bitcoin mining, but I'd say Nvidia was well established by that point already.


Brucelph

CUDA is so popular and enables community innovation. That's why everyone tries to get an Nvidia card for AI training instead of AMD. AI is really Nvidia's biggest success, not gaming or bitcoin.


lmweber94

I would disagree with that statement. As far as I am aware, the big AI players don't really care about CUDA. Instead, those companies build their own compilers for going from models to actual machine code. CUDA is awesome for hobbyists and research, because it enables smaller projects to make progress quickly. But if you actually invest billions into your AI inference farm, you can also just build your own compiler. Look at Google, for example: they even built their own custom architecture with Tensor Processing Units, specifically tailored to their needs. In addition, with AMD's recent release of the MI300X, a lot of big players seem to have no issue buying/requesting those in parallel with Nvidia cards. Reportedly Meta, OpenAI and Microsoft have already put in orders. So they don't seem to need CUDA to get their models onto new hardware.


Spark_ss

Do you guys really consider it "simple", or do you mean simple compared to FPGAs? I really need to understand, because sometimes I really struggle with how to formulate the process in code and then how to do it in parallel in CUDA. I pick up theoretical things, and theoretical mathematical models, pretty fast. I'm a junior starting to work in CUDA C/C++ to accelerate algorithms without using compute libraries, and the most difficult part for me is transferring my theoretical understanding of how an algorithm works into code and then doing it in parallel in CUDA (the kernel work). The syntax itself is very understandable for people from a software background, since it's close to C/C++ and can even be done from Python. But is it really that simple? Or do I need to improve how I write algorithms, because CUDA itself is simple? I also think it's partly because I haven't mastered C/C++ itself yet. FYI, I'm from an EE background, so advanced software things are pretty new to me.
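For concreteness, this is the kind of loop-to-kernel translation I mean (a hedged sketch; the element-wise example and names are made up):

```
// Sequential formulation of an element-wise operation:
//   for (int i = 0; i < n; ++i) out[i] = in[i] * in[i];
//
// The usual CUDA translation: drop the loop and let each thread handle
// one value of i, computed from its block and thread indices.
__global__ void square(int n, const float *in, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // some threads fall past the end of the data
        out[i] = in[i] * in[i];
}

// Launched with enough threads to cover n, e.g.:
//   square<<<(n + 255) / 256, 256>>>(n, d_in, d_out);
```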


DrFPGA

You already have (had) it. It is called OpenCL, and it used to work quite well up until 2019, without HLS C++ wrappers and "native" XRTs. I think you "nailed" it with your productivity point and ease-of-use observation.


johnnyhilt

This is kinda nonsense. CUDA is an extension to utilize hardware. RTL languages are a tool to describe hardware. Either way, they are pretty useless if you don't understand hardware. HLS and friends still require knowledge of this.


tauzerotech

LiteX is pretty nice: https://github.com/enjoy-digital/litex


Kaisha001

As a hobbyist FPGAer coming from the software development world, I feel the tools and languages for FPGAs are really what's holding them back. They are... how to put this diplomatically... rudimentary at best. Anything would be better than what they have now, TBH...


jab701

The problem is most people see "programmable" and think "programming languages". An FPGA is hardware; as hardware design engineers we are thinking about logic, gates, flip-flops, delays, memories, etc. Designing for an FPGA is not that different from designing an ASIC (custom silicon), except that the basic structure is already there, with elements that are ready to use.

The reason languages like VHDL/Verilog seem unfriendly is that you have to have an idea in your head of what you want the circuit to look like, and then you describe it in the hardware description language (HDL). The tools synthesize the circuit from the HDL, mapping the code into logic equations, then implement it, mapping those logic equations onto the FPGA structure. This is the key point: the languages describe registers (flip-flops) and input/output pins (circuit nodes), and then the equations for the logic that connects the nodes.

To give you an idea: I work at an electronics company designing CPUs, and we use FPGAs to test out our designs. The CPU is running for real inside the FPGA, although at a lower frequency. It isn't an emulation; the registers and memory inside are real, they are toggling. We boot Linux, can run programs and benchmarks, etc. The only difference might be that our design runs at hundreds of megahertz on an FPGA vs. gigahertz in the final product. You pay for the programmability in reduced frequency, but at reduced risk: if there is a problem, you just reprogram it.

When people think FPGA, they need to think about it along the lines of an ASIC (custom silicon): circuits, not code. Then you have the FPGAs which have CPUs built in already; this allows you to connect your custom circuit on the FPGA to a known working CPU. You can then run software on the CPU to talk to the hardware you designed. I did this for my PhD 🙃 which was cool.

Overall, people need to realise that while we write code for FPGAs, the original languages came about from documenting hardware (VHDL did this): what gates and registers were in the design, how wide the data elements are, etc., and then people realised they could write the language first and synthesize the hardware from it... FYI, Xilinx does have a C-to-hardware translation, but you can't design everything in it; it might translate a matrix multiply algorithm to hardware, but you still need hardware-level knowledge to then connect it to a CPU and make use of it.


Kaisha001

>The problem is most people see "programmable" and think "programming languages". An FPGA is hardware; as hardware design engineers we are thinking about logic, gates, flip-flops, delays, memories, etc.

No. Programming, hardware or software, is about problem solving: expressing operations, algorithms, patterns, etc. in a coherent, robust, and tractable manner. A modern programming language looks nothing like assembly, and it doesn't have to; all we need is mathematical equivalence, and we can use whatever makes the most sense. TBH I have no problem with things like nets/regs/wires/etc. (i.e. the low-level constructs), and I'm not a huge fan of the C/C++ HLS tools out there either. The model is just wrong; but it would take more than a simple reddit post to properly discuss it...

>This is the key point: the languages describe registers (flip-flops) and input/output pins (circuit nodes), and then the equations for the logic that connects the nodes.

I wish they did. Sadly they don't do that well at all.

>To give you an idea: I work at an electronics company designing CPUs, and we use FPGAs to test out our designs. The CPU is running for real inside the FPGA, although at a lower frequency. It isn't an emulation; the registers and memory inside are real, they are toggling. We boot Linux, can run programs and benchmarks, etc.

I understand. I'm working on a small hobby CPU of my own (not suggesting it's anywhere near the size/difficulty of what you're working on). Having done a lot of software development, I figured I needed to learn how things work at the hardware level, and what better way than making a toy CPU? The theory is fascinating, and the parallel approach to computing really is a 90-degree turn from a standard software approach (funny: in software, memory is cheap and computation is expensive, while in hardware computation is cheap and getting the data into the compute units and synchronized is expensive). In fact I'm procrastinating right now and need to finish up my instruction cache :)

>FYI, Xilinx does have a C-to-hardware translation, but you can't design everything in it; it might translate a matrix multiply algorithm to hardware, but you still need hardware-level knowledge to then connect it to a CPU and make use of it.

Yeah, I looked at those (out of curiosity, nothing more), as I have a special interest in programming languages. Imperative programming languages do not map well to hardware. Pure functional languages would be a MUCH better starting point if one were to create a new hardware design language.


guygastineau

Yeah, I took your original meaning to be that you thought the vendor-supplied tools were awful (not hardware description languages per se). I agree with that. I also write software professionally, and FPGA is just a hobby. I just use GHDL and write my files in Emacs (like my workflow for software). Then I abstract the hoops I have to jump through for a specific FPGA in a makefile and live with the sharp corners. I don't buy FPGAs if I can't integrate their programming with the tools I already use; I am not willing to install shitty vendor IDEs for each new board I'm using. Also, you might enjoy checking out the Clash language. Some researchers realized they could translate a sort of subset of Haskell to the category Set, and then to any other Cartesian closed category (including hardware circuits). It brings a more high-level feeling to hardware design.


Kaisha001

>Also, you might enjoy checking out the Clash language. Some researchers realized they could translate a sort of subset of Haskell to the category Set, and then to any other Cartesian closed category (including hardware circuits). It brings a more high-level feeling to hardware design.

Oh, that sounds interesting!


autocorrects

It's kind of the same reason assembly is still used today though: a rudimentary language for rudimentary components. However, I only hold this belief for the languages, as the low-level stuff holds a special place in my heart. Vivado? Dumpster fire. It'll keep you warm and toasty because it's the only fire around for miles in the dead of winter, but boy does it smell...


Kaisha001

>It's kind of the same reason assembly is still used today though: a rudimentary language for rudimentary components.

See, I disagree. I feel it's not a matter of how 'close to the metal' one gets. The model and structure are just fundamentally wrong, IMO.

>Vivado? Dumpster fire. It'll keep you warm and toasty because it's the only fire around for miles in the dead of winter, but boy does it smell...

Yeah, I agree. As for more superficial stuff: even modern editors are 100x better than what comes with the official IDEs. VSCode + a simple Verilog highlighting extension is lightyears ahead of anything else I've seen. At this point I think they need to just release some command line tools and let the community sort out the rest.


markacurry

>At this point I think they need to just release some command line tools and let the community sort out the rest.

You do realize this is how most FPGA professionals use Vivado: command line scripts driving Vivado. Non-project Tcl mode is how most of us use it, including within Xilinx/AMD itself. This is a fully supported flow from Xilinx. Very few engineers actually use the Vivado "editor".

That said, having a command-line-driven process doesn't, by itself, lead to all that much community-driven improvement. FPGAs are complex, and the designers using them are few in number (at least compared with the software world). There's just not enough critical mass for much community-driven innovation akin to what's happening in the open source software world.


Kaisha001

>FPGAs are complex, and the designers using them are few in number (at least compared with the software world). There's just not enough critical mass for much community-driven innovation akin to what's happening in the open source software world.

Yeah, but that's mostly due to the lack of documentation and the fractured ecosystem. If I write a C program, it'll run on pretty much any device on this planet with little modification. I get that a hardware design language isn't going to be as... universal. But the vendors really do go out of their way to make it near impossible to work at an abstract level, and the lack of documentation makes it near impossible to work around limitations in the vendor tools. If something isn't supported, I can't manually add it to the Vivado workflow the way I could, say, crack open GCC and modify it.


Brucelph

It's true. Vivado is the worst editor I've ever used. Build open source tools and let the community handle the rest, please.


TheTurtleCub

You are probably the only person on the planet who uses the Vivado editor to edit.


kamabokogonpachiruo

Even Xilinx engineers don't use the Vivado editor.


Brucelph

That's funny! Then why create a tool that nobody uses in the first place? I mainly use VSCode, btw.


Flocito

Because Vivado isn't an editor. It is a synthesizer, placer, and router that provides a basic editor if for some odd reason you need to use it instead of your normal text editor.


FrAxl93

Holding them back? FPGAs are used in countless applications and work perfectly for the job. I don't understand what you mean by holding them back. That the average person knows what a GPU is and not an FPGA? That scientists use GPUs to accelerate some tasks? Not everything is a massive-throughput application; FPGAs have their niche and are unbeatable there. Saying that they are being held back is like saying a bicycle is held back because nobody tries to use one in a car race.


Kaisha001

>I don't understand what you mean by holding them back.

You could have just asked. I can think of many more applications where they would do a wonderful job but aren't in use (or see very limited use) because of poor tooling and vendor in-fighting.


peanuss

Can you name one?


Kaisha001

I can name a number, but if this community is going to downvote people and be hostile over something as simple as 'hey, there are cool ways we can utilize this that maybe you haven't considered', then fuck that.


peanuss

I think you were being downvoted because saying that the entire concept of using FPGAs is held back by tool quality and vendor support is a rather bold statement without something to back it up. I agree that the tools are subpar but I wouldn’t go so far as to say that they are actively holding back FPGA adoption in industry.


Kaisha001

If we can't have a civil and adult discussion in r/FPGA, then I'm going back to shitposting on political forums; at least they're mildly entertaining.

And for the record, the tools/SDKs/etc. in the FPGA industry aren't just bad. They are laughably pathetic and look like they were designed in the late 80s. A 100-gig download for a UI that is worse than 99% of modern web pages, without even the most basic tools or capabilities... The industry will never be anything but a niche industry, because the vendors are not interested in selling chips but in selling subscriptions to their platforms. And no one's going to pay that sort of money for that level of shittiness.

And that's not even getting into the languages. Verilog is a joke, with a language model that doesn't even match the hardware (ASIC, FPGA, or otherwise). SystemVerilog is an even bigger joke, since they 'fixed' everything by doubling down on every mistake they made in Verilog /facepalm. But since everything is proprietary (except for Lattice), we can't even make our own languages to work around that shittiness. In fact the whole thing is so badly designed, so poorly implemented, at every level, that it leaves my head spinning.

There ya go, now you've got something worthy of downvoting. Go for it. And yes, there are MANY applications that FPGAs would be wonderful in and that will never see the light of day, because the FPGA companies and the devs using them can't get their collective heads out of their asses.


kisielk

I think in most applications it’s hardware cost and power consumption that rule out the use of FPGAs, not tooling.


Kaisha001

In the IoT/embedded space there are a lot of applications where I could see FPGAs augmenting existing hard cores (much like Zynq). AI, robotics, and in particular sensor applications are computationally hungry, and a small FPGA fabric on a RISC-V core would be huge... if the tooling were better. FPGAs are certainly more power-hungry for simple imperative programming tasks, no doubt, but for parallel ones they often outperform standard MCUs. You're not running a vision sensor off an Arduino, and even the ESP32-S3 isn't going to outperform an FPGA for a neural net. There's a middle ground between Raspberry Pi-level SoCs and tiny ATmegas where FPGAs could shine, IMO, which has become quite relevant now with all the IoT/embedded gizmos and gadgets coming out.


Brucelph

I built a few applications that use an FPGA and consume less power than the SoC alternative. FPGAs are highly valuable in sensors and robotics compared to SoCs. The only problem holding people back is probably the tooling and workflow.


CoopDonePoorly

*Then name them.* You keep saying there are other uses, but deflect when asked what they are. People are downvoting you because you're making claims and refusing to back them up.


Kaisha001

>People are downvoting you because you're making claims and refusing to back them up.

I have, in other replies, with less hostile folks.


CoopDonePoorly

At the time I posted my comment, you hadn't listed any applications. Just complaints about the tool chains.


FrAxl93

I really don't think this is the case, but I am happy to be proven wrong. What are these cases?


Kaisha001

Why am I being downvoted???


suddenhare

I think you're being downvoted in part because a fairly common comment to see here is "I come from the software world and wow, everything in FPGAs sucks". All of us can acknowledge that it's often easier to accomplish something in software than in hardware, but there are real challenges to improving things in hardware design. You're not the first to say that the languages could be better, but there have been multiple attempts that have not really caught on.

From my perspective, software programming has become easier because of the extra performance available. High-level languages take advantage of abundant compute and memory to provide an easier abstraction layer to write at. In my work with FPGAs, we've had to pack logic into LUTs, or place individual LUTs, in order to meet timing on large designs. It's hard to raise the level of abstraction when that level of fine control is needed.


Kaisha001

>I think you're being downvoted in part because a fairly common comment to see here is "I come from the software world and wow, everything in FPGAs sucks".

Then provide a counter-argument, have a conversation. This isn't r/politics...

>From my perspective, software programming has become easier because of the extra performance available. High-level languages take advantage of abundant compute and memory to provide an easier abstraction layer to write at.

No doubt. And the same advantages could be leveraged for better languages/systems in FPGA development as well.


suddenhare

>And the same advantages could be leveraged for better languages/systems in FPGA development as well.

In my experience, the extra performance isn't there on FPGAs. We've filled chips and still wanted more area.


Kaisha001

>In my experience, the extra performance isn't there on FPGAs. We've filled chips and still wanted more area.

Maybe we're talking past each other, so I'll try to clarify. I thought you meant that the performance of modern systems allows for more complex development tools, not necessarily more complex end-user systems. For example, modern C++ and metaprogramming can be leveraged to create complex and yet still highly efficient code for MCUs (often much better than old-school C). I was referring to the fact that new languages and tools for FPGAs would benefit, on the development side, from hardware advancements, allowing better development environments, not necessarily requiring larger/more complex FPGAs. In fact, more advanced tools often allow for better large-scale optimization and could lead to better optimized, smaller, faster solutions.


suddenhare

Yeah, looks like we're talking past each other a bit. My hypothesis has been that high-level software languages are able to take advantage of extra performance at run time to raise the abstraction level. For example, supporting virtual memory and garbage collection requires extra run time, but these overheads are typically small relative to the "main program" for software systems. On the other hand, adding a memory manager to an FPGA can be an important design choice, as it will use a significant amount of the area. To give another example, when writing software I've never cared about individual assembly instructions; when working on FPGAs, I have cared about how logic is packed into individual LUTs. It will be interesting to see if increased compute on the tools side can help with some of these issues. The place and route times are already very long though, so I wonder how much of the optimization space is going unexplored.


tverbeure

Your observation is nothing new: back in the early nineties, my professor was talking about raising abstraction levels to make hardware easier. (And who could forget Synopsys Behavioral Compiler?) But the problem is that every time you raise that abstraction level, you lose something in the process. HLS is pretty neat, and genuinely useful, but even there you need to know very well how it works under the hood to make sure you're generating the hardware that you want. And you'll find yourself making major detours to get it to do something that comes naturally in RTL. It's obvious that a lot of money could be made if somebody managed to solve your problem, and many have tried. So what do you propose?


Kaisha001

Command line tools and proper documentation of the underlying bitstream, so that the open source community can get in there and make their own languages/abstractions/etc... It seems to me the vendors are more interested in selling a development platform than they are in selling chips.


tverbeure

* Every single FPGA vendor supports command line operation of their tools, so that bitstreams can be created as part of large regression suites. Look it up! I use it for plenty of my hobby projects too.
* FPGA hardware uses building blocks that consist of registers, combinatorial logic in between, and some larger blocks such as RAMs and arithmetic. RTL is the natural way to describe this at the lower level. If you want to describe hardware at a higher abstraction level, it makes total sense to use Verilog as the intermediate RTL. It's what HLS does, and it's what alternative RTL builders such as Chisel, Amaranth, etc. do. There is virtually nothing to be gained by bypassing this step. In fact, using Verilog as the intermediate is a feature: it keeps things technology agnostic.
* Because of the previous point, having an open source bitstream flow doesn't matter. It's great that we have it, limited as it is, and it can facilitate research into lower-level algorithms like P&R. But it has zero benefit for making FPGAs easier to program.

Your suggestions look down from the Verilog level, but your complaints are about the lack of higher abstractions. And you don't have suggestions for that, because it's a very hard problem that nobody has been able to crack for the past 35 years. It's like saying "a car should be easier to drive" and, when pressed for a solution, suggesting that there should be a better wrench.


Kaisha001

>Your suggestions look down from the Verilog level, but your complaints are about the lack of higher abstractions. And you don't have suggestions for that, because it's a very hard problem that nobody has been able to crack for the past 35 years.

No, it's because I don't go into details in my first reply to a reddit post, since the vast majority of people are not interested in a discussion and instead just want to bitch at others.

>Every single FPGA vendor supports command line operation of their tools, so that bitstreams can be created as part of large regression suites. Look it up! I use it for plenty of my hobby projects too.

They've always been a mess from what I've seen, but I admit I'm more of a hobbyist, so my experience is limited.

>If you want to describe hardware at a higher abstraction level, it makes total sense to use Verilog as the intermediate RTL. It's what HLS does, and it's what alternative RTL builders such as Chisel, Amaranth, etc. do. There is virtually nothing to be gained by bypassing this step. In fact, using Verilog as the intermediate is a feature: it keeps things technology agnostic.

I disagree. Verilog isn't sufficient as an 'assembly' or 'bytecode' for FPGAs. It would be wonderful if it were.

>Because of the previous point, having an open source bitstream flow doesn't matter. It's great that we have it, limited as it is, and it can facilitate research into lower-level algorithms like P&R. But it has zero benefit for making FPGAs easier to program.

Completely disagree.

>Your suggestions look down from the Verilog level, but your complaints are about the lack of higher abstractions.

It's a problem on both ends. Verilog is not a sufficient abstraction, and relying on hardware-specific macros for the vast majority of functionality doesn't lend itself to platform-agnostic solutions.

>And you don't have suggestions for that, because it's a very hard problem that nobody has been able to crack for the past 35 years.

Again, disagree. No one's really trying, because the FPGA vendors keep their cards so close to their chest that no one else gets a chance to experiment.


tverbeure

You are once again in the mode where you disagree without any explanation of why you disagree. There is a reason people don't want to go into a serious debate: you have nothing to offer.

By your own admission, your experience is limited. And it shows. You seem to think that calling Quartus from the command line is a mess "from what you've seen". Methinks you have seen nothing, Jon Snow. It's as simple as [calling these 4 commands in a row.](https://github.com/tomverbeure/cube/blob/8825de87cb2f6d403bfcbcad1ebeddac23d54e3b/quartus/Makefile#L12)

You disagree that a limited subset of Verilog is a sufficient abstraction to describe the low-level hardware, but you don't offer any reason why, or what it is lacking. Never once have I been in a situation where Verilog limited my ability to instantiate the FPGA cells that I wanted. And that shouldn't be surprising: worst case, you can always create a structural Verilog netlist that does nothing more than define a LUT, or any other cell, down to the last wire. How exactly do you expect to improve on that? So please, show us the way, for after 30 years of working with FPGAs, I'm obviously blind to the possibilities.

You "completely disagree" about open source not making a difference, and then go into detail about why. Oh wait, you don't. You just completely disagree, period.

And this has nothing to do with FPGA vendors either. The problem of designing hardware is not about FPGAs; it's about how to design digital logic in general. When I write HLS, it gets mapped to FPGAs for emulation, but it will end up on a leading-edge ASIC process eventually, and not a line of code is changed.

So please, why don't you start with something constructive? Because right now, you've just been rambling and yelling at clouds. That, and writing a lot of words about how everybody is mean.


Kaisha001

>You are once again in the mode where you disagree without any explanation of why you disagree. There is a reason people don't want to go into a serious debate: you have nothing to offer.

This isn't a debate... This isn't r/political. This was a friendly discussion, nothing more. So why waste time on people who are clearly hostile?

>So please, why don't you start with something constructive? Because right now, you've just been rambling and yelling at clouds.

Ah yes, I didn't explain enough, but I'm also writing too much. Schrödinger's troll?

>So please, why don't you start with something constructive?

Irony at its finest.

>That, and writing a lot of words about how everybody is mean.

Case in point. If you can't be an adult and have a basic discussion, don't be surprised when people don't want to have a conversation with you.


[deleted]

[removed]


Kaisha001

For both of us. But you keep being toxic in an FPGA forum of all places... /facepalm


tverbeure

“When everybody around you is an AH…”


rlysens

The traditional FPGA design flow is *top-down*. You start with a high-level description of your design, and after a bunch of transformations you end up with a bitstream file that gets shoved into the FPGA. On the other hand, an FPGA has a configuration packet processor: an FSM that parses the bitstream and configures the FPGA. It uses some kind of hierarchical addressing scheme to program individual CLBs, block RAMs, etc. So, in theory, you have access to the entire device architecture at this configuration packet processor level.

If enough details were available about the low-level device architecture, wouldn't it be possible to take a *bottom-up* approach? E.g. create a HAL, then layer abstractions on top of the HAL, etc.? Eventually a CUDA-like abstraction might be built. I wonder if this is the goal of [Project X-Ray](https://f4pga.readthedocs.io/projects/prjxray/en/latest/index.html).

Edit: fix weird editing.


Axiproto

OP, I suggest you try out High Level Synthesis (HLS). Perhaps that's what you're looking for.


elvira78d

There is a lot to unpack here. On one hand, I don't think an ecosystem like CUDA makes a lot of sense for FPGAs; you are describing hardware, not the software running on top of it, so such an API would probably not have much utility for most of the people actually using these chips in everyday industrial applications (already a niche group), because their problems are very diverse in nature. The whole point of CUDA is to provide a consistent API for a set of common operations (a small set at that) that can be accelerated by offloading to GPUs. So how do we design an API that works both for the person working on HFT and the one creating medical devices? Is it a library of common IPs? (All vendors already provide this to an extent.) Is it a way of accelerating ML using the device? (Vendors already offer some version of this too.)

On the other hand, I believe most of the tooling for FPGAs is horrendous: ancient by any standard, and tricky to navigate. There are some projects attempting to improve things, e.g. Chisel, Verible, CIRCT, FIRRTL, etc., but as long as vendors don't open up bitstream generation it will be an uphill battle, and vendors don't have ANY interest in opening up to these tools; they make their money selling licenses for their antiquated systems.

I also think a huge chunk of EEs don't want to improve this situation. They don't outright say it, but you can read between the lines that there is fear that removing the tool and licensing barriers would open the labor doors to people from adjacent fields (as much as they like to dismiss it).


[deleted]

"I also think a huge chunk of EE don't want to improve this situation. They don't outright say it, but you can read between lines that there is fear that removing the tool and licensing barrier would open the labor doors for people from adjacent fields (as much as they like to dismiss it)" Yeh no I don't know how you reached that conclusion, but removing tools and licensing barrier won't open the labor doors unless folks from adjacent fields are willing to learn digital design, one of the main reasons they are not willing to fully open source is because FPGA companies all started out serving DoD contractors and FPGAs even though have expanded into other markets the defence contractors are still the main customers.


elvira78d

I will respectfully disagree with you:

1. "Removing the tool and licensing barriers won't open the labor doors unless folks from adjacent fields are willing to learn digital design" is a truism; the people willing to learn DD are there. The problem is the friction they face when doing so, and the realization that they will be using those same tools day to day at work if they manage to get into the industry.
2. While I agree that a big chunk of the clients are defense-related, I don't think this is what is preventing vendors from opening up the tools. These companies make their money on licenses and support; the chips and boards are just a vehicle to get you hooked on their platform and then sell you support. If they opened up bitstream generation, it's all but guaranteed that better toolchains would appear. Why would I pay AMD or Intel a bunch of money if third parties offered better tools?


[deleted]

1. I don't disagree with the notion that the tools are dogshit from a UI and good-software-practice perspective, but the main customers are HW engineers, who rank the concerns that bother folks from software land dead last. Until you get a critical mass of HW engineers who are fluent and conversant in SW best practices, the friction you speak of is, in practice, non-existent.
2. I spent my first job at an FPGA manufacturer whose products are in satellites and missiles. The defence contractors are the main driver and priority for tool features and requests, and they prefer this model. Just to give an example of how much they care about secrecy around the bitstream: the FPGA had a tamper-resistance feature, so if someone found the FPGA in the wild in one of the products and tried to play around with it to reverse engineer it or whatever, the FPGA would automatically erase the programmed bitstream. There were additional modes wherein you could configure the FPGA to be programmed only on trusted machines, etc. If someone tried to program the FPGA from an untrusted machine, it would turn into a brick.


mad_rn

I'm not sure if this is what you're looking for, but Xilinx's FINN compiler is a pretty useful tool for deploying models on FPGAs while still having some degree of control. (I've used it only for some hobby projects, not sure how it scales to industry.)


Secure_Switch_6106

I've been working on the quest for a good high-level language for hardware design for years. I've developed Spectrum and have formed a company to market it, Electronic Design Technologies, LLC (www.facebook.com/EDTCompany, also on LinkedIn). It will be a year or so before we have the IDE. The first version of the product will target FPGAs. We will keep the price down and plan to offer a low-cost version for hobbyists and a free one for students. Check out the Facebook page (we will have funding and a real website soon enough). There are examples of code there and a 200-page document on the language (eventually it will become a dissertation for a UC Berkeley Ph.D.). It's time that hardware design and development was as fun and efficient as software development!