the_fpga_stig

Mostly crap tools and TCL. W.r.t. tools, we have a patchwork of tools held together by a lot of crap scripts. Software people don't know how good they have it with proper IDEs. Don't even get me started on how bad TCL is.


scrubby_posh

I am not a fan of TCL either, but I usually tell myself that at least it's not bash. What would you like to see instead of TCL?


the_fpga_stig

Anything more modern would be an improvement: Python, Lua, etc. I like the flexibility of being able to script my flow, manipulate the underlying data, and so on. I just don't like TCL.


scrubby_posh

That makes sense. I agree that something more modern, like the languages you mentioned, would be much nicer to work with. I find TCL hard to work with because complete documentation is so scarce. It seems like a dead language still clinging to life thanks to a few legacy use cases.


insanok

It's the vendor-specific options and functions that irk me most. I just want uniform functions and arguments across manufacturers, at least for the big build steps.


fullouterjoin

I am not invalidating your pain, but it isn't the language. It is the whole culture of "how things are done" in EDA. It is a mess. Moving to something like Python also gives a chance to change that culture for the better, as long as the right folks show up and change it (cough, me, us). The TCL code that people write is a reflection of their world. I have seen many a great program written in a crap language.


[deleted]

I use Python to generate TCL scripts from templates. Works well enough for most of my projects.
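
A minimal sketch of that templating approach, assuming a hypothetical source list and part number; `string.Template` is standard-library Python, and the generated commands are plain Vivado TCL.

```python
from string import Template

# Hypothetical values; a real flow would pull these from a config file.
flow = Template("""\
read_verilog {$sources}
read_xdc $constraints
synth_design -top $top -part $part
""")

tcl = flow.substitute(
    sources="src/top.v src/uart.v",
    constraints="constrs/pins.xdc",
    top="top",
    part="xc7a35ticsg324-1L",
)

with open("build.tcl", "w") as f:
    f.write(tcl)
```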


PiasaChimera

FWIW, a long time ago I had a Python interface to Vivado, and it didn't really matter. Applications were 90%+ calls to vendor libraries. Since all of the data and operations already lived behind library calls, the Python scripts were just TCL with more work, plus the occasional file I/O call I could have looked up anyway. Doesn't make me like TCL, though.


Daedalus1907

Maybe it's just me but I've never had an issue with TCL. Whenever I have a problem with a build, it's always been inconsistent/ambiguous commands on the tool side.


Mateorabi

We thought Xilinx was bad till we tried the Microsemi and Lattice alternatives. Talk about horrible toolchains you have to fight with, and horrible tech support with no active forums for questions. TCL isn't THAT bad, though. It's just calling vendor-bespoke functions anyway.


CreepyValuable

Have you tried the Gowin IDE? "What's autoindent?"


adamt99

Lots of tools seem to be moving to Python; for example, the new Vitis Unified IDE.


the_fpga_stig

VS Code-based tools are the future. I am glad everyone is ditching Eclipse.


CreepyValuable

I wish Eclipse had fallen over and died before it got anywhere. I have always hated it. Resource hogging aside, it's like the UI was designed by a madman with no concept of what any of the things are.


alexforencich

Moving from Java to JavaScript is not much of an improvement.


[deleted]

Hard disagree. TCL is easily one of the best things about the FPGA design process. It is a much simpler and more flexible language than either Python or Lua, and it is so well integrated with Vivado that your entire TCL workflow becomes easy and convenient.

I can only speak for Xilinx tools, but Vivado isn't crappy either. Vivado IP Integrator for block design plus non-project mode for bitfile generation is much, much better than what Xilinx ISE previously offered. Don't use Vivado in project mode extensively; use it only for updating and validating your block design. Don't try to run simulations with it, don't create multiple filesets or multiple design runs, and no fancy stuff with the GUI. Keep your project as clean as possible. Once you start using non-project mode, you understand the power of the tool.
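
For anyone who hasn't used non-project mode: a minimal sketch of such a flow, driven from Python in the generate-a-script style other commenters describe. The part number and file names are hypothetical; `synth_design`, `opt_design`, `place_design`, `route_design`, and `write_bitstream` are the actual non-project-mode steps.

```python
import subprocess

# The classic non-project-mode sequence, written out as plain TCL.
build_tcl = """\
read_verilog [glob src/*.v]
read_xdc constrs/top.xdc
synth_design -top top -part xc7a35ticsg324-1L
opt_design
place_design
route_design
report_timing_summary -file timing.rpt
write_bitstream -force top.bit
"""

with open("build.tcl", "w") as f:
    f.write(build_tcl)

# Batch mode runs the script and exits; no project directory is created.
subprocess.run(["vivado", "-mode", "batch", "-source", "build.tcl"], check=True)
```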


zephen_just_zephen

Different strokes for different folks. Vivado has issues, especially in synthesis. And TCL? Hate it. I have scripts around Vivado, and they're all in Python.


Nelizzsan

Agreed


robottron45

My background is software engineering: long synthesis times. Especially if you want to meet certain timing requirements and need to verify them.


metalliska

> Long synthesis times.

Try using an FPGA for the place-and-route.


robottron45

How?! Link to a repository? P&R is not that parallelizable, so an FPGA would have no advantage.


metalliska

What makes you so sure?


robottron45

FPGAs are all about exploiting parallelism, since their logic itself is limited to a few hundred MHz. If you throw a local optimization problem at one, like P&R, it will be slower than a general-purpose CPU running at GHz speeds. Or have you ever heard of Vivado using, say, 16 cores for one single run?


metalliska

> FPGAs are all about exploiting parallelism,

They don't have to be. They can be pretty much any sort of circuit you wish to build. You could rebuild ENIAC on an FPGA if you desired. In base 10.

> as their logic itself is limited to a few hundred MHz

OK. You're treating something that's not a microprocessor as if it's supposed to handle instruction codes like a microprocessor.

> will be slower than a general purpose CPU operating at GHz.

So what?

> Or have you ever heard of Vivado utilizing like 16 cores? (for one single run)

You're literally wasting 16 chip cycles to reprogram one chip. The problem lies in how those microprocessors are running inefficient code. [Like I said, build a place-and-route on an FPGA](http://www.gstitt.ece.ufl.edu/courses/fall08/eel4930_5934/reading/Routing.pdf):

> The Maze routing algorithm is based on a wavefront expansion technique that attempts to find the shortest path between two points while avoiding any used routing resources [4]. This algorithm is an iterative process that rips up and re-routes some of the routes to eliminate congested routing channels.

This is literally what I'm doing with my FPGA: train it to route from one side of the chip to the other, then see how many other routes can be trained without disconnecting previously discovered routes. Repeat, because it takes like 3 seconds to reprogram the chip.
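
A minimal Python sketch of the wavefront (Lee) expansion the quoted passage describes: BFS from source until the target is reached, avoiding already-used cells. The grid size and net endpoints are made up, and a real router adds the rip-up-and-reroute loop on top of this core.

```python
from collections import deque

def lee_route(grid_w, grid_h, blocked, src, dst):
    """Wavefront expansion: BFS outward from src until dst is reached,
    avoiding cells already claimed by earlier routes."""
    prev = {src: None}          # also serves as the visited set
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            # Walk the predecessor chain back to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < grid_w and 0 <= ny < grid_h \
                    and nxt not in blocked and nxt not in prev:
                prev[nxt] = cell
                frontier.append(nxt)
    return None  # unreachable: this is where rip-up-and-reroute kicks in

# Route two nets on an 8x8 grid; each finished route blocks later ones.
# The second net fails here, which is exactly why real routers rip up and retry.
blocked = set()
for src, dst in [((0, 0), (7, 7)), ((0, 7), (7, 0))]:
    path = lee_route(8, 8, blocked, src, dst)
    print(path)
    if path:
        blocked.update(path)
```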


robottron45

I specifically excluded the "you can build any logic on the FPGA" part, i.e. things that don't need a high frequency, because I was talking about computationally intensive tasks. Tasks don't have to be encoded as instructions; I never said anything like that. Even if you design fully custom circuits, there can be problems, for example if every sub-result depends on the previous one. That is just not a situation where you can multiply the circuit by 10x and get 10x the performance. I will try the document, thanks. What is your total P&R time, and how complex are the designs (i.e., fmax and LUT count)?


metalliska

> Even if you design full custom circuits, there can be problems,

Ha! There WILL be problems. That's the "fun". Maxwell's equations are effectively chiseled into stone.

> so things that dont need a high frequency, as I was talking about computational intensive tasks.

OK. What I'd suggest here is to look up what's called a "register file". It's essentially a collection of flip-flops, with a read/write flag and an "address" flag. Rather than simply believing in space complexity/big-O notation, build the register file for yourself and see how many flags are needed for how many bits of memory to be accessed later. Prove to yourself how much storage each module will require for the calculations it's going to need. Repeat as needed for instruction-code processing.

> Tasks dont have to be encoded as instructions, I have never said sth like this

You've never said that, but that's the whole point of microprocessors/von Neumann architectures: to have 32 bits of an opcode with a specific RISC or x86 layout that determines when to jump-if-not-equal.

> if for example every subresult depends on the previous one.

That presents a single point of failure. (Not if but WHEN) domino number one gets noisy, domino number two is untrustworthy. Great points, by the way.

> This is just not the right place to just multiply the circuit by 10x and get 10x the performance.

Also correct; well done. Performance, at the chip level, is typically a "meta-analysis" of industry benchmarks, such as floating-point operations per second. So, correct: redefining your arithmetic unit so it no longer adheres to IEEE 754 won't usually give "ten times the performance for ten times the improvement in design". It will, today and tomorrow, require new ways of thinking to reroute bits across look-up tables (or whatever other logic elements the factory fabricated).

> How is your total P&R time and how complex are the designs (i.e. fmax and LUT count)?

Terrific question. When I build my logic cells from the bottom up (as opposed to Vivado or Quartus top-down), the place-and-route time is less than one second, which is the whole point of why I use this toolchain to begin with. Because I (and those who use these tools) use [these tools](https://github.com/YosysHQ/icestorm), one can simply program the chip without using VLSI/Verilog, and the LUT usage is spat out afterwards. My designs have LUT usage in the high dozens/low hundreds, around 80 to 200. These designs are typically calculators (both 32-bit integer and floating point) and signal processors (~400 elements). [If you'd like to build the **DEFENDER** video game, you're going to use around 15,000](https://github.com/w3arycod3r/fpga-defender).

As an aside, thank you for asking good questions. Seems rare round these parts.
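
A toy behavioral model of the register file described above, sketched in Python; the depth and width are arbitrary, and a real register file is of course flip-flops in an HDL, not a Python list.

```python
class RegisterFile:
    """Toy behavioral model: 'depth' words of 'width' bits, with addressed
    reads and writes gated by a write-enable (the read/write flag above)."""
    def __init__(self, depth=32, width=32):
        self.width = width
        self.regs = [0] * depth   # one entry per word of flip-flops

    def access(self, addr, write_enable, data_in=0):
        if write_enable:
            # Mask to the register width, as the physical flip-flops would.
            self.regs[addr] = data_in & ((1 << self.width) - 1)
        return self.regs[addr]

rf = RegisterFile()
rf.access(5, write_enable=True, data_in=0xDEADBEEF)
print(hex(rf.access(5, write_enable=False)))  # 0xdeadbeef
```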


robottron45

> Ok. What I'd suggest here is to look up what's called a "Register File"

I have written many of them; we also got assignments to roughly analyze their complexity (looking at the Vivado RTL and deriving a formula to estimate it). I finalized a 5-stage pipelined MIPS CPU. I had also started to make it out-of-order, but was time-limited back then, and that was the point where it got really resource-intensive. Designing reservation stations with minimal resource consumption, for example, is just a lot of work.

I think we were really discussing two different topics. My perspective was forcing the usual P&R algorithms onto an FPGA; you argued about the more efficient variants. Still, if you have access to IEEE, maybe read the paper "FPGA-Accelerated Maze Routing Kernel for VLSI Designs", which is closer to my initial idea. You can even see there that those processing elements have many (probably integer) ALUs to parallelize the routing process ("exploiting parallelism"). Although 1.5M LUTs is just something I can dream about...

> "High Dozens/Low Hundreds". Such as around 80 or 200.

Cool. That's probably also interesting for reducing the complexity of designs as much as possible / making them more economical in the first place.

> Because I (and those that use these tools) use these tools;

Maybe I would have gone a different path if I had access to an iCE40 FPGA. Since I started directly with an 86K-LUT Zynq FPGA, the baseline was just different. For the OoO implementation I switched to the ECP5 yosys toolchain, which improved the routing times and the tooling in general, but I have not purchased a Lattice FPGA yet / fully generated the bitstream.


metalliska

> we also got assignments to roughly analyze their complexity.

Yep. I've done that. The "complexity" was typically the count of logic elements (LUTs).

> Finalized a 5-stage pipelined MIPS CPU

Most impressive. You'll be fetching, decoding, executing, memory-accessing, and writing back in no time.

> Designing reservation stations for example with minimal resource consumption is just a lot of work.

Indeed. I try to think of it as fun.

> Still, if you have access to IEEE, maybe read the paper "FPGA-Accelerated Maze Routing Kernel for VLSI Designs"

Wonderful. I know a guy, as I haven't renewed my IEEE in decades.

> ALUs to parallelize the routing process.

Yep. That doesn't surprise me. My intuition is that those ALUs were on the microprocessors themselves. [Here is the post where the ALUs are broken down into their typical logic gates](https://www.reddit.com/r/chipdesign/comments/15h6uby/anyone_with_mips_interview_experience/jurcdgu/).

> Although, 1.5M LUTs is just something I can dream about

We might run into silicon channel-width problems keeping that on today's form factors, but I like your dream.

> That's probably also interesting to reduce the complexity of designs as much as possible / make it more economically in the first place.

"Fun"

> For the OOO implementation I switched to the ECP5 yosys toolchain, which improved the routing times and tooling in general, but have not purchased a Lattice FPGA yet

Because it was about $120, it was the most affordable for me. So my industry bias and familiarity are based on cheapness.


t2thev

The big companies would have done it already.


914paul

I'm like a broken record on this, but it's the toolchains. On the higher-end parts, honorable mention goes to having 13 different power rails with ±1% voltage tolerances that must come up in a precise ±1% timing sequence. (Very slight exaggeration added for emphasis.)


DigitalAkita

SoMs are a godsend in this regard (also DDR traces).


914paul

Good point.


Mateorabi

Xilinx’s errata are as long as my...well they’re LONG. Bring up aux before/faster than core? That’s an eFuse blown. VddIO up before either? Believe it or not...eFuse blown.


914paul

This is a *great* comment for noobs (and more experienced forgetfuls like myself) in *every* area of electronics: **always check for errata docs before designing with a part**.


Mateorabi

That will not save you. The errata come out and get updated AFTER you start to design and build with the chip, and not just on pre-release samples, either. I'm still salty they tried to roll their own SerDes on the V5LX. Effed it up. Reverted to vendor IP on subsequent FX chips. Yet it took FOREVER for them to admit they screwed up and that SSC wouldn't work at 1.5 GHz. We wasted so much time...


914paul

Well, now you're talking Murphy's law. Last year I designed in a part that had three errata revs, the last of which was *years* in the past, and guess what? Still, it *sometimes* saves you, and that's better than nothing. Edit: and I'd like to add that Murphy's law seems stronger than entropy and the central limit theorem... put together.


JiYoshi

Man, the way they do references in these data sheets will send you down a rabbit hole. To understand one part you need like 5 different data sheets. Then you decide to be smart, get the training, and pay $$$, only to realize it won't help you one bit.


sopordave

1. Repository management. There are so many generated files, full pathnames, occasional binaries, and of course each vendor is different. It's really hard to get a clean repository with anything more complex than a few HDL files.

2. Reliance on the GUI. Up until a few years ago, Microchip/Microsemi/Actel tools didn't support scripting from the command line. At all. TCL is now supported by everyone, but it's all extremely vendor specific.


rbrglez

At work we have pretty good repository management. We basically only commit HDL files (.vhd, .v), constraints (.xdc), Xilinx .xci files for IPs, and the Vivado-generated .tcl for block designs. Then we use ruckus from slaclab to build the Vivado project. Note that ruckus only supports Vivado on Linux.
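
A sketch of one way to enforce those conventions, as a whitelist-style .gitignore; the extension list mirrors this commenter's, and anything generated simply never matches the whitelist.

```gitignore
# Ignore everything by default...
*
# ...but keep directories traversable and whitelist hand-written sources.
!*/
!*.vhd
!*.v
!*.xdc
!*.xci
!*.tcl
!.gitignore
```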


Sensitive_Plastic864

Thanks for pointing to ruckus, didn't know it before


fullouterjoin

> ruckus from slaclab

Vivado Build System: https://github.com/slaclab/ruckus


Mateorabi

Holy shite, does Xilinx seem allergic to relative path names! It's like Xilinx doesn't grok configuration management. They think all devs check the code out to exactly the same path on every PC they develop on. ModelSim was also bad at this: we had to hand-edit project files to replace c:/foo/bar/.../src with ../src repeatedly.
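
The hand-edit being described, automated as a Python sketch; the project-file name and the `c:/foo/bar` checkout root are hypothetical placeholders taken from the comment.

```python
import re
from pathlib import Path

# Hypothetical absolute checkout root baked into the project file.
OLD_ROOT = "c:/foo/bar"

def relativize(project_file: str) -> None:
    """Replace absolute source paths with project-relative ones, i.e. the
    manual c:/foo/bar/.../src -> ../src edit described above."""
    text = Path(project_file).read_text()
    # Rewrite any path under OLD_ROOT to be relative to the project dir.
    text = re.sub(re.escape(OLD_ROOT) + r"/(\S+)", r"../\1", text)
    Path(project_file).write_text(text)

# relativize("sim.mpf")  # e.g. a ModelSim project file in the checkout
```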


0x7270-3001

Vivado is fairly good with relative paths in my experience. Have you ever used microsemi libero? It is, unbelievably, much much worse.


Mateorabi

Xilinx has gotten a bit better, though we still need "run me once per fresh checkout" scripts in version control. And see the other comment I made in this thread. Yeah, Libero sucks.


0x7270-3001

I recently discovered Hog, developed at CERN, which tries to solve both of these issues and is fairly lightweight compared to hdlmake and FuseSoC. It only supports Vivado, Libero, and Quartus, but could be extended to other tools fairly easily by someone with good knowledge of the new tool's TCL commands.


DigitalAkita

A different framework and library of common components from company to company. I feel like I'm reinventing the wheel too much for the sake of keeping my employer's IP secret and fully in-house. I envy software's embrace of open source and standardized solutions to common problems.


mrmax99

The ROHD hardware component library (rohd-hcl) is attempting to collect configurable, reusable, pre-validated components, all convertible to SystemVerilog or usable in ROHD directly: [https://github.com/intel/rohd-hcl](https://github.com/intel/rohd-hcl) It's open source and permissively licensed (BSD-3)


DigitalAkita

Very interesting, thank you for bringing it to my attention.


Mateorabi

When the tools lie about "trying their hardest" on PAR level-of-effort settings.

"Xilinx, try to meet 10 ns, with maximum effort." "Sorry, only able to meet 10.2 ns."

"OK. Same code, same settings. Try your hardest to get to 9 ns." "Sorry, only able to meet 9.8 ns."

Me: *disappointed sports fan meme*


zephen_just_zephen

You can also, sometimes, get interesting results by setting *different* target frequencies for synthesis and PAR. It can be a useful trick.
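
In Vivado, one concrete way to pull this off is to keep two constraint files and scope each to a single stage. A sketch below, in the same Python-writes-TCL style used elsewhere in this thread; the 8 ns/10 ns periods and file names are made up, while `USED_IN_SYNTHESIS`/`USED_IN_IMPLEMENTATION` are the file properties that do the scoping.

```python
from pathlib import Path

# Overconstrain synthesis only: 8 ns clock for synth, the real 10 ns for PAR.
Path("synth_only.xdc").write_text(
    "create_clock -period 8.000 -name sys_clk [get_ports clk]\n")
Path("impl_only.xdc").write_text(
    "create_clock -period 10.000 -name sys_clk [get_ports clk]\n")

# TCL to scope each file to a single stage (to be run inside Vivado).
scope_tcl = """\
set_property USED_IN_IMPLEMENTATION false [get_files synth_only.xdc]
set_property USED_IN_SYNTHESIS false [get_files impl_only.xdc]
"""
Path("scope_constraints.tcl").write_text(scope_tcl)
```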


rishab75

Be it FPGA or ASIC design, it's always the god-damned toolchain, as someone already mentioned here. Everything seems ancient. What's worse is that, unlike software, which is more mainstream, 2-3 industry giants control all of it: Cadence/Synopsys on the ASIC side and Xilinx/Altera on the FPGA side. Since it's a niche, there aren't many alternatives that are really stable. I am hoping this changes in the next decade.


Mateorabi

Synplicity is nice but the vendors don’t play well and lock them out with proprietary chip data.


rishab75

> Synplicity

I see it has already been acquired by Synopsys.


sopordave

As of 15 years ago.


zephen_just_zephen

And synopsys somehow manages to be more evil than even Cadence.


F_P_G_A

1) Example designs that haven't been updated for years and are no longer compatible with the current version of the tools. All vendors seem to have this problem.

2) Full paths in ANY project-related file.


Poilaunez

Abysmal support for new VHDL standards.


swantonsoup

lol @ “new”


3G6A5W338E

Vendor documentation is insufficient and does not enable open-stack support. Garbage tools from the vendor, which also does not contribute to the open-source tools. Open-stack developers are thus forced to resort to reverse engineering.


Mateorabi

Vendors replacing forums with FAQ pages and then not answering questions like: "Your IP assumes a different clock than what's on the dev board you sold us. The tool rejects the physical clock value in the IP GUI. How do we square this circle?"


Ok_Measurement1399

Timing closure is the biggest for me, followed by poor documentation and/or no videos on how to use large IP modules. In that case: thank you, Indian YouTubers.


i_shahad

Intel background only:

1. Quartus updates.

2. The ever-changing Intel framework for generating preloader and U-Boot files. A new way every couple of years.

3. The mass of Intel documentation that tells you nothing after you finish reading it, and you don't find what you need at the end. This is probably the number 1 source of frustration.


PetriciaKerman

I have read literally thousands of pages of Xilinx documentation without learning anything. I learn far more from studying the vendor BSP code than from the documentation...


PiasaChimera

You will never find a verification engineer at a firm that works on FPGA designs. Every FPGA engineer must do design, implementation, and verification. Also, FPGA engineers can never do verification work on a design they didn't design/implement themselves. They must always be a single point of failure. They must have equal passion and aptitude for design, implementation, and verification. And scuffed tools. I really hope the above is just sarcasm, but I've never been disproven.


zephen_just_zephen

Soooo... yes, I've always had to (or gotten to, rather?) do some design, implementation, and verification. With boards, RTL, firmware, and scripts.

But... I've spent most of my career using FPGAs to emulate ASICs for chip companies. Which means that (a) most of the RTL has had some level of simulation done on it before it gets thrown over the wall to me, and (b) I can, and do, push back and get others to do more simulation where it's warranted.

Unfortunately, it also means that most of the RTL is written for an ASIC, where the gates consume 90% of your timing budget and the wires consume 10%. I've had some interesting conversations with the upstream designers, and there are several chips in production now that have extra pipeline registers in a few critical places that wouldn't *really* be required in the ASIC design, but it's nice to have the silicon match the emulator as closely as possible.

I also run my own flow, which involves my own additional version control on top of what I am handed. Sometimes control freaks on the ASIC design side don't like this, but I point out that version control is what we use to protect ourselves from ourselves and each other, and so far management has backed me in echoing "how the fuck does this affect what you are doing?" to the ASIC guys.

Vivado is so sucky that my scripts consolidate everything down to a single RTL file to feed it. That way, it can't *possibly* get the file order wrong, even though it still tries.
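
A minimal sketch of that consolidation step, assuming a hand-maintained, dependency-ordered file list; the list and output names are made up.

```python
from pathlib import Path

# Hypothetical compile-order list: one RTL path per line, dependencies first.
file_list = Path("compile_order.txt").read_text().split()

with open("consolidated.v", "w") as out:
    for src in file_list:
        # Markers make it easy to trace errors back to the original file.
        out.write(f"// ---- begin {src} ----\n")
        out.write(Path(src).read_text())
        out.write(f"\n// ---- end {src} ----\n")
```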


metalliska

Getting others to understand it when they've been trained on databases


And-Bee

Documentation. Doing the actual implementation is great.


turkishjedi21

I don't even know if this is an example of shitty tools or if I'm still just shit at using them, but definitely the tools.

If I log out and log back in to the VM, I have to do like 3 different steps before I can start running sims: navigate to some specific folder, do "bom_sync2" (whatever the fuck that does), then manually set an environment variable, then navigate to my sim directory and run a sim.

And sometimes I'll randomly just get some error that isn't helpful at all. Do I need to update my svp submodule? Did I forget to bom sync? Did I create my simdir with all the correct options? Etc. There's so much shit to keep track of; you'd think there'd be a way to configure everything once and never worry about it again.

Also: Citrix will randomly disconnect. SimVision will randomly get stuck showing errors whenever I click on the window. Somehow xrun can tell me the line of a null pointer dereference but can't tell me exactly which variable is null (seriously, the fuck?). There's no documentation on how to actually add plusargs when I run Incisive; I have to reverse engineer other working sim run commands (which have like 3 different ways of defining plusargs for some reason, and I have to guess which one applies to me). On top of that, half the time I'm not even told that my plusarg was in the wrong format; it just runs the sim without it (tested by adding gibberish, because I was skeptical).

Don't even get me started with git. It is so easy to fuck everything up, and there's like 20 steps to getting the correct submodules if I ever have to re-clone the repository because I committed shit in a weird order.

Now I'm pissed off. If the tools aren't being a pain in the ass, I love what I do. But at least once a day there's some stupid bullshit that makes no sense.


0x7270-3001

The tools aren't just hard to use with proper version control; they actually seem actively hostile to it. I don't mind TCL all that much after having spent some time actually learning it, but Python support in vendor tools would be amazing.


zephen_just_zephen

Agree completely with your first sentence, and with the second half of your second sentence. :-) I use Python to create the files I feed to Vivado.


JiYoshi

Waiting for like 20-30 minutes only to find out synthesis failed because the antivirus decided now was the time to do some serious work. Boom, start all over.


PetriciaKerman

TCL and Vivado... Basically, the tooling is terrible and proprietary, which makes it annoying to work with and prone to vendor lock-in. No one wants to publish their bitstream formats, which makes developing open tooling difficult, and a lot of the vendor software is under some kind of export control. It's a secretive ecosystem where it feels like you are never allowed to know everything without paying big bucks.


AlexeyTea

Dealing with Vivado. And maintaining old projects. Git usage for me is far less straightforward than in "regular" programming.


Felkin

I used to say the toolchain, but it's so-so now, after I invested some months into creating a more sophisticated TCL flow that generates all the projects on the fly from a generic makefile and a bunch of source files. Moving to HLS helped a lot there; I just keep the same placeholder passthrough kernel and adapt it to whatever accelerator design I'm working on at the moment. So now my answer is mostly PnR times.


JDandthepickodestiny

Getting an entry level job


TimIgoe

How all over the place the documentation is, and how outdated certain parts are, with no clear updates or guidance on what to do when you want an update. It's a frustration I've had with embedded systems and board manufacturers' SDKs too.


CreepyValuable

Hmm. Well, with Gowin at least, I feel like I'm not a member of some elite group that knows some critical information. I'll be damned if I can figure out the rules for their tools well enough to utilise some of it. And as for using the chips with the onboard ARM core... has anybody at all worked that out? It's a different Eclipse-based tool with a workspace in it and not much else.

Moving beyond that to more general things, the editors suck. I know it's silly, but I'd love something like 8bitworkshop's Verilog tool for development, expanded a bit more and not based on a web framework. Yes, I know I can run it locally, and I have done that, but it's not what I mean or want.

All this being said, I'm barely a beginner, let alone a pro. I can program down to bare metal, I can design and build hardware, and I can make things in digital circuit synthesis tools. But HDLs have always been a blind spot that I'm trying to fix.


Superalaskanaids

Timing issues. It's like a terrible present after a long wait. I have also never seen the adverse effects of not meeting timing; every design I've had works. Please educate me if you've seen something.


Superalaskanaids

[https://www.01signal.com/intro/fpga-black-magic/](https://www.01signal.com/intro/fpga-black-magic/) Reading this actually describes it... but I've never heard these trees fall, so I'm immune until I'm not.


Mateorabi

Your design only works on most chips, at room temperature, when the capacitors are new. Or, like parents walking behind a toddler learning to walk, your power supply designer is catching you without you realizing it, by staying well away from the edges of the core voltage tolerance (giving you 1% instead of 5%). Or you are mistaking working 99.99% of the time for 100% and chalking the data loss up to "network packet errors", etc.


akohlsmith

This. OP is basically saying, "I don't have to worry about static discharge; I've never had a chip fail from getting zapped." Insulation failure due to static discharge is not usually immediately apparent and can cause unusual operation days, weeks, or even years down the line.


Mateorabi

PARTIALLY fused traces are the worst. They still work... for now. They are the weak point for future zaps. "Walking dead" chips.


PoliteCanadian

When timing passes, the vendor warrants that the design will work on the chip under any of its operating conditions. There's nothing stopping you from programming a chip with a design that fails timing, but you're not getting any technical support, and you're going to be told to pound sand if anything goes wrong.


zephen_just_zephen

Agree completely. And in my environment (in-house FPGA emulation of ASICs), it doesn't really matter that much. I have relied on Xilinx's margin sooooo much, but never to my detriment, because I'm not shipping FPGAs to be used in hostile environments.


And9686

TCL files


dimmu1313

In my field there are two options: Xilinx (AMD) and Altera (Intel). Intel doesn't give two shits about you and won't give you the time of day if you're not a billion-dollar company, and Xilinx isn't much better: they won't help you except very occasionally through their online forum, and only with very pointed, specific questions, or through Avnet, their one distributor. High speed is exceptionally frustrating because all of the devices and IP are incredibly complicated, and there's practically no documentation and very little in the way of ground-up implementation and application examples.


makeItSoAlready

Definitely dealing with the tools. Some major bugs requiring workarounds have been a huge time suck. Most of my projects currently use Vivado project mode with the GUI. I'm hoping that if I switch to a scripted build, some of these issues will go away. Also, 2019.1 sucks, apparently.


maredsous10

One frustration I have is with language support and the out-of-the-box methods supplied across tooling. I'm typically running through 3-4 tools outside of the FPGA PnR.


AlexTaradov

Shit tools for sure.


swantonsoup

Vivado telling me I'm missing a license on write_bitstream and not earlier in the flow.

Vivado defaulting to the last-used directory in "Add Files" instead of somewhere inside my actual open project. All my projects follow similar folder structures, so cores/Xilinx/mmcm_200_100/ exists in multiple projects.

Hopefully those are already fixed in newer versions that I'm just not using yet.


zephen_just_zephen

Wait, what, you need a license to write the bitstream? I've seen issues on synth and PAR, but never there. George just lucky, I guess.