When placing components, is it possible to set a rule so that components cannot overlap? Just as routing can be given a spacing rule so traces don't overlap, can components be constrained the same way? It seems like this would be useful during placement. Or how does everyone handle placement and floorplanning — enlarge the GRID, or use SKILL? Thanks in advance.
2014-12-19 11:21
However, it does not seem to reset as expected. Could someone nudge me in the right direction on how
2019-07-10 10:46
Hello. After updating the DSO-X 2012A to the latest firmware, I found a rather curious problem; I'm unsure if it's caused by the WFG or by a lack of coffee on my side. I connected the WFG OUT to the CH1 IN using a 50 Ohm cable and set the WFG to a 1 kHz square wave, Hi-Pot = 1 V, Lo-Pot = 0 V, Impedance = 50 Ohms. On measuring the signal, the scope shows me 2 Vpp instead of the expected 1 Vpp. Attached is the display of the measured signal, with the WFG settings still shown. So the question arises: should I rather go to bed and call it a day, or is this supposed to be this way?
Attachment: wfg_bug_dsox2012a_01.png (11.9 KB)
2019-04-25 15:49
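On the WFG reading above: 2 Vpp is exactly what a 50 Ohm source driving a high-impedance scope input produces. A generator whose output impedance is set to 50 Ohms doubles its open-circuit swing internally, expecting an external 50 Ohm load to divide it back down to the displayed setting. A minimal sketch of the divider arithmetic (the function name and the 1 MOhm scope input value are my assumptions, not from the post):

```python
def vpp_at_load(v_set_pp, r_source=50.0, r_load=50.0):
    """Peak-to-peak voltage seen by the load.

    A generator set to 50 ohm output impedance doubles the requested
    swing internally, expecting the external 50 ohm load to divide it
    back down to the displayed setting.
    """
    v_open_circuit = 2.0 * v_set_pp  # internal EMF, before the divider
    return v_open_circuit * r_load / (r_source + r_load)

print(vpp_at_load(1.0))              # matched 50 ohm load: 1.0 Vpp
print(vpp_at_load(1.0, r_load=1e6))  # 1 Mohm scope input: ~2.0 Vpp
```

So with the scope's high-impedance input, a 50 Ohm feed-through terminator at the scope end of the cable should bring the reading back to the expected 1 Vpp.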
Hello all! I've recently been fortunate enough to work on deploying a Dell 720 with two K2 cards, VMware 6.0, and XenDesktop/XenApp 7.6. I must admit I am impressed with the performance I have received from this combination. It won't beat a custom quad-SLI rig, but for an enterprise/business environment it's great. My question is: does anyone have real-world performance comparisons between the two different cards, or even a gut feel? We're in a position to get another server and wanted to look at the possibility of K1 cards for higher user density. Just looking at the board specs tells me it will be slower, but it's hard to know how much slower without kicking the tires. Any information would be greatly appreciated! I'll post whatever I find as I go along, since I'm just beginning my search on this.
2018-09-04 15:18
Hi all, I see the following MAP warning when implementing my design. Although I don't think it is hurting my timing, I would still like to understand why it is generated in the first place. There are 16 of them:

WARNING:Pack:2515 - The LUT1 inverter "USB_UNIT4/rdi_d1_RNIKV6E" could not be joined with the OLOGIC component matching output buffer "FD_iobuf[1]/OBUFT". This may result in sub-optimal timing. The LUT1 inverter USB_UNIT4/rdi_d1_RNIKV6E drives multiple loads.

I am trying to implement the following (the right-hand sides of the assignments were lost when the code was pasted):

    // begin logic
    always @(posedge Ifclk) begin
        if (~nReset_) RdI ...
        else          RdI ...
    end
    always @(posedge Ifclk) begin
        rdi_d1 ...
        rdi_d2 ...
    end
2018-10-15 11:54
Hello, I would like to use a pair of E1437A ADCs for spectrum analysis up to 8 MHz, with the option to display the cross spectrum. Due to my limited VEE experience, that looks like a massive task, especially as I'm not yet familiar with VXI instruments. Are there any program modules available that I could start from? Even waveform acquisition for the E1437A would be very helpful. Thanks, Adrian
2018-10-10 17:12
Hi, I am working with the Xilinx Spartan-3E Starter Kit. I can control the ADC and DAC parts separately, but I fail to connect the two modules together: the straight-through system does not work. I would like to find similar code to compare with mine, but I have only found the PicoBlaze signal-generator code. I wonder whether Xilinx provides any sample code for an ADC-to-DAC straight-through system, so that I could learn from it and find the error in my code. Or perhaps anyone who has succeeded with an ADC/DAC straight-through system could give me some suggestions. David
2019-05-28 13:15
can be set in Tools-Options-Design-Nudge. 7. How do I show and hide drill holes? Use the modeless command do; or tick "show drill holes" in Tools-Options-Routing-Options. 8. How do I protect some special traces so that they are not
2019-02-15 02:39
Quite often my design fails timing when an ILA core is present, where I've marked nets for debug in my block diagram. I noticed that when Vivado modifies the XDC file via the debug wizard, it adds this constraint:

set_property C_CLK_INPUT_FREQ_HZ 300000000 [get_debug_cores dbg_hub]

Why is the clock frequency so high? Could this be causing my inability to meet timing? Can I manually change it? I should say this happens more frequently when I have more than one ILA in my design, e.g., multiple clock domains that I'm debugging.
2018-10-29 14:12
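On the ILA question above: the 300 MHz figure is just the default the debug wizard writes, and it can be changed by hand. A minimal sketch of the usual adjustment, assuming (my assumption, not stated in the post) that the clock actually feeding dbg_hub runs at 100 MHz:

```tcl
# Tell the debug hub the real frequency of its input clock, so the
# inserted debug logic is built and timed for that rate rather than
# the 300 MHz default. Replace 100000000 with your actual hub clock.
set_property C_CLK_INPUT_FREQ_HZ 100000000 [get_debug_cores dbg_hub]
```

The value should match the frequency of the clock connected to the dbg_hub clock input; the constraint can live in the XDC alongside the wizard-generated ones.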
Forgive my ignorance, but I'm an FPGA n00b coming from a software background. Just like 5000 other n00b posters on this forum, I want to use the Ethernet capabilities of my Spartan-3E. Nobody online actually has a ready-made copy/paste solution for this, and that's OK by me; I'll do the work and figure it out. I've already done a lot of the homework, but I'm confused about one thing in general. I'm aware that there are two fundamental ways to use the Ethernet controller: using the free IP core that Xilinx's IDE offers, or writing the VHDL/Verilog myself and correctly banging out an MII interface. The latter method is apparently a royal pain, because you have to calculate checksums and otherwise manipulate TCP/UDP packets at about the lowest level possible. My question is this: my application requires packets to be processed with the lowest amount of overhead possible; I don't even want to wait one extra clock cycle if I don't have to. My understanding was that any *core* you put on an FPGA is effectively a "soft-core" processor; in other words, the robust functionality of these various IP cores exists because the FPGA is programmed with VHDL to simulate, say, an 8086 CPU, which then basically runs a program written in C or assembly for this virtual 8086, since it's a lot easier to use those languages than VHDL. Is that correct?
If so, does that mean I should expect my overall network performance to be slower than if I wrote the processing myself in straight VHDL? If not, can a "core" coexist with a straight VHDL program, the same way I would link to an external symbol in C? Or does the core consume the whole chip? Thank you for taking the time to respond to a n00b. I appreciate it. -n00b
2019-05-31 03:31